Space Transformation Patents (Class 345/427)
  • Patent number: 11948242
    Abstract: Methods and apparatuses are described for intelligent smoothing of 3D alternative reality applications for secondary 2D viewing. A computing device receives a first data set corresponding to a first position of an alternative reality viewing device. The computing device generates a 3D virtual environment for display on the alternative reality viewing device using the first data set, and a 2D rendering of the virtual environment for display on a display device using the first data set. The computing device receives a second data set corresponding to a second position of the alternative reality viewing device after movement of the alternative reality viewing device. The computing device determines whether a difference between the first data set and the second data set is above a threshold. The computing device updates the 2D rendering of the virtual environment on the display device using the second data set, when the difference is above the threshold value.
    Type: Grant
    Filed: August 27, 2021
    Date of Patent: April 2, 2024
    Assignee: FMR LLC
    Inventors: Adam Schouela, David Martin, Brian Lough, James Andersen, Cecelia Brooks
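    Illustrative sketch (not from the patent): a minimal Python example of the thresholded update described in the abstract above, where the secondary 2D rendering is refreshed only when the headset pose has moved more than a chosen threshold. The function names, the use of plain position vectors, and the 5 cm threshold are assumptions for illustration.

```python
import numpy as np

def pose_difference(first_pos: np.ndarray, second_pos: np.ndarray) -> float:
    """Euclidean distance between the two received position data sets."""
    return float(np.linalg.norm(second_pos - first_pos))

def maybe_update_2d_rendering(first_pos, second_pos, threshold, render_2d):
    """Refresh the secondary 2D view only when the alternative reality
    viewing device has moved by more than `threshold`, smoothing out
    small pose jitters; otherwise keep the existing 2D rendering."""
    if pose_difference(first_pos, second_pos) > threshold:
        render_2d(second_pos)       # update the 2D rendering from the second data set
        return second_pos           # the second data set becomes the new reference
    return first_pos                # below threshold: keep the prior reference

# A 3 cm move with a 5 cm threshold leaves the 2D view untouched.
ref = np.array([0.00, 1.60, 0.00])
ref = maybe_update_2d_rendering(ref, np.array([0.03, 1.60, 0.00]),
                                threshold=0.05,
                                render_2d=lambda p: print("re-render 2D view at", p))
```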
  • Patent number: 11928783
    Abstract: Aspects of the present disclosure involve a system for presenting AR items. The system performs operations including receiving a video that includes a depiction of one or more real-world objects in a real-world environment and obtaining depth data related to the real-world environment. The operations include generating a three-dimensional (3D) model of the real-world environment based on the video and the depth data and adding an augmented reality (AR) item to the video based on the 3D model of the real-world environment. The operations include determining that the AR item has been placed on a vertical plane of the real-world environment and modifying an orientation of the AR item to correspond to an orientation of the vertical plane.
    Type: Grant
    Filed: December 30, 2021
    Date of Patent: March 12, 2024
    Assignee: Snap Inc.
    Inventors: Avihay Assouline, Itamar Berger, Gal Dudovitch, Peleg Harel, Gal Sasson
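    Illustrative sketch (not from the patent): one way to modify an AR item's orientation so it matches a detected vertical plane, here by rotating the item's forward axis onto the plane's horizontal normal with Rodrigues' formula. The function name and the assumption that the item's forward axis is roughly horizontal are illustrative.

```python
import numpy as np

def align_to_vertical_plane(item_forward: np.ndarray, plane_normal: np.ndarray) -> np.ndarray:
    """Return a 3x3 rotation matrix turning the item's forward axis so it
    faces along the (horizontal) normal of a detected vertical plane."""
    n = plane_normal.astype(float).copy()
    n[1] = 0.0                                  # vertical plane: keep the normal horizontal
    n /= np.linalg.norm(n)
    f = item_forward / np.linalg.norm(item_forward)
    axis = np.cross(f, n)
    s, c = np.linalg.norm(axis), float(np.dot(f, n))
    if s < 1e-8:                                # already aligned, or exactly opposite
        return np.eye(3) if c > 0 else np.diag([-1.0, 1.0, -1.0])  # 180 deg about the up axis
    k = axis / s
    K = np.array([[0, -k[2], k[1]],
                  [k[2], 0, -k[0]],
                  [-k[1], k[0], 0]])
    return np.eye(3) + s * K + (1 - c) * (K @ K)   # Rodrigues' rotation formula

R = align_to_vertical_plane(np.array([0.0, 0.0, 1.0]), np.array([1.0, 0.0, 0.0]))
print(R @ np.array([0.0, 0.0, 1.0]))               # ~[1, 0, 0]: the item now faces the wall
```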
  • Patent number: 11922632
    Abstract: A human face data processing method according to an embodiment of the present disclosure includes acquiring a picture of a human face by means of a scanning apparatus, obtaining point cloud information by means of a structured light stripe, further obtaining a three-dimensional model of the human face, and mapping the three-dimensional model onto a circular plane in an area-preserving manner so as to form a two-dimensional human face image. Three-dimensional data is thus converted into two-dimensional data, which facilitates data storage. In addition, because the mapping is area-preserving, the restoration quality is better when the two-dimensional data is restored to three-dimensional data, which facilitates re-use of the three-dimensional image.
    Type: Grant
    Filed: November 4, 2020
    Date of Patent: March 5, 2024
    Assignee: BEIJING GMINE VISION TECHNOLOGIES LTD.
    Inventors: Wei Chen, Boyang Wu
  • Patent number: 11921971
    Abstract: Live broadcasting recording equipment, a live broadcasting recording system and a live broadcasting recording method are provided. The live broadcasting recording equipment includes a camera, a processing device, and a terminal device. The camera captures images to provide photographic data. The processing device executes background removal processing on the photographic data to generate a person image. The terminal device communicates with the processing device and has a display. The processing device executes multi-layer processing to fuse the person image, a three-dimensional virtual reality background image, an augmented reality object image, and a presentation image, and generate a composite image. After an application gateway of the processing device recognizes a login operation of the terminal device, the processing device outputs the composite image to the terminal device, so that the display of the terminal device displays the composite image.
    Type: Grant
    Filed: April 11, 2022
    Date of Patent: March 5, 2024
    Assignee: Optoma China Co., Ltd
    Inventors: Kai-Ming Guo, Tian-Shen Wang, Zi-Xiang Xiao, Yi-Wei Lee
  • Patent number: 11915342
    Abstract: Systems, methods, and non-transitory computer-readable media can obtain data associated with a computer-based experience. The computer-based experience can be based on interactive real-time technology. At least one virtual camera can be configured within the computer-based experience in a real-time engine. Data associated with an edit cut of the computer-based experience can be obtained based on content captured by the at least one virtual camera. A plurality of shots that correspond to two-dimensional content can be generated from the edit cut of the computer-based experience in the real-time engine. Data associated with a two-dimensional version of the computer-based experience can be generated with the real-time engine based on the plurality of shots. The two-dimensional version can be rendered based on the generated data.
    Type: Grant
    Filed: July 15, 2022
    Date of Patent: February 27, 2024
    Assignee: Baobab Studios Inc.
    Inventors: Mikhail Stanislavovich Solovykh, Wei Wang, Nathaniel Christopher Dirksen, Lawrence David Cutler, Apostolos Lerios
  • Patent number: 11897394
    Abstract: A head up display for a vehicle including a display device configured to output light forming an image, an optical system configured to control a path of the light such that the image is output towards a light transmission region, and a controller configured to generate the image based on a first view and a second view such that a virtual image is produced on a ground surface in the light transmission region, the first view being towards the ground surface, the second view being towards a 3D space above the ground surface, the first view and the second view being based on an eye-box, the ground surface being in front of the vehicle, and the virtual image including a graphic object having a stereoscopic effect, and control the display device to output the image.
    Type: Grant
    Filed: July 20, 2021
    Date of Patent: February 13, 2024
    Assignee: NAVER LABS CORPORATION
    Inventors: Jae Won Cha, Jeseon Lee, Kisung Kim, Jongjin Park, Eunyoung Jeong, Yongho Shin
  • Patent number: 11900528
    Abstract: A method of rendering a view is disclosed. Three occlusion planes associated with an interior cavity of a three-dimensional object included in the view are created. The three occlusion planes are positioned based on a camera position and orientation. Any objects or parts of objects that are in a line of sight between the camera and any one of the three occlusion planes are culled. The view is rendered from the perspective of the camera.
    Type: Grant
    Filed: May 27, 2021
    Date of Patent: February 13, 2024
    Assignee: Unity IPR ApS
    Inventors: Andrew Peter Maneri, Donnavon Troy Webb, Jonathan Randall Newberry
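    Illustrative sketch (not from the patent): the culling step described above, approximated here by testing whether an object's bounding sphere blocks the segment between the camera and sample points placed on the occlusion planes. Bounding spheres and the use of sampled plane points are simplifying assumptions.

```python
import numpy as np

def segment_hits_sphere(p0, p1, center, radius):
    """True if the segment from p0 to p1 passes within `radius` of `center`."""
    d = p1 - p0
    t = np.clip(np.dot(center - p0, d) / np.dot(d, d), 0.0, 1.0)
    return np.linalg.norm(center - (p0 + t * d)) <= radius

def cull_blocking_objects(camera_pos, plane_points, objects):
    """Keep only the objects whose bounding spheres do not block the line of
    sight between the camera and any sample point on an occlusion plane.
    `objects` is a list of (center, radius) bounding spheres; `plane_points`
    are sample points lying on the three occlusion planes."""
    return [(center, radius) for center, radius in objects
            if not any(segment_hits_sphere(camera_pos, p, center, radius)
                       for p in plane_points)]
```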
  • Patent number: 11887499
    Abstract: The present invention relates to a virtual-scene-based language-learning system comprising at least a scheduling and managing module and a scene-editing module, and further comprising an association-analyzing module. The scheduling and managing module is connected to the scene-editing module and the association-analyzing module, respectively, in a wired or wireless manner. The association-analyzing module analyzes second-language information input by a user and provides at least one associated image and/or picture, and displays the associated image and/or picture selected by the user on a client, so that a teacher at the client is able to understand the language information expressed in the second language by the student based on the associated image and/or picture.
    Type: Grant
    Filed: July 13, 2021
    Date of Patent: January 30, 2024
    Inventor: Ailin Sha
  • Patent number: 11889222
    Abstract: The present disclosure provides a system and method for creating a multilayer scene from multiple visual input data and injecting an image of an actor into the multilayer scene to produce an output video approximating a three-dimensional space, conveying depth by visualizing the actor in front of some layers and behind others. This is useful in many situations where the actor needs to appear on a display with other visual items without overlapping or occluding those items. A user can interact with other virtual objects or items in a scene, or even with other users visualized in the scene.
    Type: Grant
    Filed: July 22, 2021
    Date of Patent: January 30, 2024
    Inventor: Malay Kundu
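    Illustrative sketch (not from the patent): back-to-front compositing of a multilayer scene with the actor cut-out inserted at a chosen depth index, so the actor appears in front of some layers and behind others. It assumes Pillow and same-size RGBA layers; the function names are illustrative.

```python
from PIL import Image

def composite_with_actor(layers, actor, actor_depth_index):
    """Composite same-size RGBA `layers` back to front, inserting the RGBA
    `actor` cut-out at `actor_depth_index` so it is drawn in front of the
    layers behind it and occluded by the layers in front of it."""
    stack = list(layers)                       # back-most layer first
    stack.insert(actor_depth_index, actor)
    out = Image.new("RGBA", stack[0].size, (0, 0, 0, 0))
    for layer in stack:                        # straightforward alpha compositing
        out = Image.alpha_composite(out, layer)
    return out
```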
  • Patent number: 11875583
    Abstract: The present invention belongs to the technical field of 3D reconstruction in the field of computer vision, and provides a dataset generation method for self-supervised learning of scene point cloud completion based on panoramas. Pairs of incomplete point clouds and target point clouds with RGB information and normal information can be generated by taking RGB panoramas, depth panoramas and normal panoramas in the same view as input, so as to construct a self-supervised learning dataset for training the scene point cloud completion network. The key points of the present invention are occlusion prediction and equirectangular projection based on view conversion, together with handling of the stripe problem and the point-to-point occlusion problem during conversion. The method includes a simplified collection mode for point cloud data in a real scene, an occlusion prediction scheme based on view conversion, and the design of a view selection strategy.
    Type: Grant
    Filed: November 23, 2021
    Date of Patent: January 16, 2024
    Assignee: DALIAN UNIVERSITY OF TECHNOLOGY
    Inventors: Xin Yang, Tong Li, Baocai Yin, Zhaoxuan Zhang, Boyan Wei, Zhenjun Du
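    Illustrative sketch (not from the patent): the equirectangular projection used when converting between panoramic views and 3D points, mapping points expressed in the panorama camera frame to pixel coordinates. The axis conventions (y up, z forward) are assumptions.

```python
import numpy as np

def equirectangular_project(points, width, height):
    """Project 3D points (N, 3), expressed in the panorama camera frame with
    y up and z forward, to equirectangular pixel coordinates (u, v)."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    lon = np.arctan2(x, z)                      # longitude in [-pi, pi]
    lat = np.arctan2(y, np.hypot(x, z))         # latitude in [-pi/2, pi/2]
    u = (lon / (2 * np.pi) + 0.5) * width       # left-right position on the panorama
    v = (0.5 - lat / np.pi) * height            # top of the image is lat = +pi/2
    return np.stack([u, v], axis=1)
```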
  • Patent number: 11875012
    Abstract: The technology disclosed relates to positioning and revealing a control interface in a virtual or augmented reality that includes causing display of a plurality of interface projectiles at a first region of a virtual or augmented reality. Input is received that is interpreted as user interaction with an interface projectile. User interaction includes selecting and throwing the interface projectile in a first direction. An animation of the interface projectile is displayed along a trajectory in the first direction to a place where it lands. A blooming of the control interface from the interface projectile at the place where it lands is displayed.
    Type: Grant
    Filed: May 21, 2019
    Date of Patent: January 16, 2024
    Assignee: Ultrahaptics IP Two Limited
    Inventor: Nicholas James Benson
  • Patent number: 11854115
    Abstract: A vectorized caricature avatar generator receives a user image from which face parameters are generated. Segments of the user image including certain facial features (e.g., hair, facial hair, eyeglasses) are also identified. Segment parameter values are also determined, the segment parameter values being those parameter values from a set of caricature avatars that correspond to the segments of the user image. The face parameter values and the segment parameter values are used to generate a caricature avatar of the user in the user image.
    Type: Grant
    Filed: November 4, 2021
    Date of Patent: December 26, 2023
    Assignee: Adobe Inc.
    Inventors: Daichi Ito, Yijun Li, Yannick Hold-Geoffroy, Koki Madono, Jose Ignacio Echevarria Vallespi, Cameron Younger Smith
  • Patent number: 11856297
    Abstract: A panoramic video camera comprises a plurality of image sensors configured to capture a plurality of frames at a time, and image processing circuitry configured to generate a frame read signal to read the plurality of frames generated by the plurality of image sensors, apply a cylindrical mapping function to map the plurality of frames to a cylindrical image plane, and stitch the cylindrically mapped plurality of frames together in the cylindrical image plane based on a plurality of projection parameters.
    Type: Grant
    Filed: April 1, 2019
    Date of Patent: December 26, 2023
    Assignee: GN AUDIO A/S
    Inventor: Yashket Gupta
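    Illustrative sketch (not from the patent): an inverse cylindrical mapping of the kind a stitcher might apply before blending, computing for each pixel of the cylindrical image the source pixel in the original frame. The pixel-unit focal length and centered principal point are assumptions.

```python
import numpy as np

def cylindrical_inverse_map(width, height, focal):
    """For every pixel of the destination (cylindrical) image, compute the
    source pixel in the original pinhole frame; the result can drive a remap
    step, after which neighbouring frames differ mainly by a horizontal shift."""
    u, v = np.meshgrid(np.arange(width, dtype=float),
                       np.arange(height, dtype=float))
    theta = (u - width / 2.0) / focal           # angle around the cylinder
    h = (v - height / 2.0) / focal              # height on the cylinder
    x = focal * np.tan(theta)                   # back-project to the flat image plane
    y = h * focal / np.cos(theta)
    return x + width / 2.0, y + height / 2.0    # source pixel coordinates
```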
  • Patent number: 11842444
    Abstract: Embodiments include systems and methods for visualizing the position of a capturing device within a 3D mesh, generated from a video stream from the capturing device. A capturing device may provide a video stream along with point cloud data and camera pose data. This video stream, point cloud data, and camera pose data are then used to progressively generate a 3D mesh. The camera pose data and point cloud data can further be used, in conjunction with a SLAM algorithm, to indicate the position and orientation of the capturing device within the generated 3D mesh.
    Type: Grant
    Filed: June 2, 2021
    Date of Patent: December 12, 2023
    Assignee: STREEM, LLC
    Inventors: Sean M. Adkinson, Teressa Chizeck, Ryan R. Fink
  • Patent number: 11838486
    Abstract: In one implementation, a method of performing perspective correction is performed at a head-mounted device including one or more processors, non-transitory memory, an image sensor, and a display. The method includes capturing, using the image sensor, a plurality of images of a scene from a respective plurality of perspectives. The method includes capturing, using the image sensor, a current image of the scene from a current perspective. The method includes obtaining a depth map of the current image of the scene. The method includes transforming, using the one or more processors, the current image of the scene based on the depth map, a difference between the current perspective of the image sensor and a current perspective of a user, and at least one of the plurality of images of the scene from the respective plurality of perspectives. The method includes displaying, on the display, the transformed image.
    Type: Grant
    Filed: July 13, 2020
    Date of Patent: December 5, 2023
    Assignee: APPLE INC.
    Inventors: Samer Samir Barakat, Bertrand Nepveu, Vincent Chapdelaine-Couture
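    Illustrative sketch (not from the patent): a single-pixel version of depth-based perspective correction, unprojecting a pixel with its depth, applying the relative pose between the image sensor's perspective and the user's perspective, and reprojecting. The pinhole intrinsics K and the (R, t) parameterization are assumptions.

```python
import numpy as np

def reproject_pixel(u, v, depth, K, R, t):
    """Move one pixel from the image sensor's perspective to the user's
    perspective: unproject it with its depth, apply the relative pose (R, t)
    between sensor and eye, and project back with the pinhole intrinsics K."""
    p_cam = depth * (np.linalg.inv(K) @ np.array([u, v, 1.0]))   # 3D point, sensor frame
    p_eye = R @ p_cam + t                                        # 3D point, eye frame
    q = K @ p_eye
    return q[0] / q[2], q[1] / q[2]                              # pixel in the corrected view
```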
  • Patent number: 11830148
    Abstract: A mixed reality (MR) simulation system includes a console and a head mounted device (HMD). The MR system captures stereoscopic images from a real-world environment using outward-facing stereoscopic cameras mounted to the HMD. The MR system preprocesses the stereoscopic images to maximize contrast and then extracts a set of features from those images, including edges or corners, among others. For each feature, the MR system generates one or more two-dimensional (2D) polylines. Then, the MR system triangulates between 2D polylines found in right side images and corresponding 2D polylines found in left side images to generate a set of 3D polylines. The MR system interpolates between 3D vertices included in the 3D polylines or extrapolates additional 3D vertices, thereby generating a geometric reconstruction of the real-world environment. The MR system may map textures derived from the real-world environment onto the geometric representation faster than the geometric reconstruction is updated.
    Type: Grant
    Filed: July 30, 2020
    Date of Patent: November 28, 2023
    Assignee: Meta Platforms, Inc.
    Inventors: James Allan Booth, Gaurav Chaurasia, Alexandru-Eugen Ichim, Alex Locher, Gioacchino Noris, Alexander Sorkine Hornung, Manuel Werlberger
  • Patent number: 11816782
    Abstract: Systems can identify visible surfaces for pixels in an image (portion) to be rendered. A sampling pattern of ray directions is applied to the pixels, so that the sampling pattern of ray directions repeats, and with respect to any pixel, the same ray direction can be found in the same relative position, with respect to that pixel, as for other pixels. Rays are emitted from visible surfaces in the respective ray direction supplied from the sampling pattern. Ray intersections can cause shaders to execute and contribute results to a sample buffer. With respect to shading of a given pixel, ray results from a selected subset of the pixels are used; the subset is selected by identifying a set of pixels, collectively from which rays were traced for the ray directions in the pattern, and requiring that surfaces from which rays were traced for those pixels satisfy a similarity criterion.
    Type: Grant
    Filed: March 2, 2022
    Date of Patent: November 14, 2023
    Assignee: Imagination Technologies Limited
    Inventors: Gareth Morgan, Luke T. Peterson
  • Patent number: 11804011
    Abstract: Disclosed is a method and apparatus for enabling interactive visualization of three-dimensional volumetric models. The method involves maintaining three-dimensional volumetric models represented by explicit surfaces. In accordance with an embodiment of the disclosure, the method also involves, for a current point of view, generating and displaying images of the volumetric models in a manner that clarifies internal structures by accounting for light attenuation inside the volumetric models as a function of spatial positions of the explicit surfaces. The method also involves, upon receiving user input that adjusts a display variable, repeating the generating and the displaying of the images in accordance with the display variable that has been adjusted, thereby enabling interactive visualization of the volumetric models while simultaneously clarifying the internal structures by accounting for the light attenuation inside the volumetric models.
    Type: Grant
    Filed: September 15, 2021
    Date of Patent: October 31, 2023
    Assignee: LlamaZOO Interactive Inc.
    Inventors: Charles Lavigne, Li Jl
  • Patent number: 11777616
    Abstract: A method and arrangement for testing wireless connections is provided. The method comprises obtaining (500) a three-dimensional model of a given environment; obtaining (502) ray tracing calculations describing propagation of radio frequency signals in the given environment; locating (504) one or more devices in the given environment; determining (506) utilising ray tracing calculations the radio frequency signal properties of one or more devices communicating with the device under test; transmitting (508) control information to the radio frequency controller unit for updating the connections between one or more devices and a set of antenna elements to match with the determined properties; obtaining (510) information on the location and propagation environment of the one or more devices and updating (512) the radio frequency signal properties of the one or more devices if the location or propagation environment changes.
    Type: Grant
    Filed: December 13, 2022
    Date of Patent: October 3, 2023
    Assignee: Nokia Solutions and Networks Oy
    Inventors: Juha Hannula, Marko Koskinen, Petri Koivukangas, Iikka Finning
  • Patent number: 11770495
    Abstract: Systems and methods for generating a virtual view of a virtual camera based on an input image are described. A system for generating a virtual view of a virtual camera based on an input image can include a capturing device including a physical camera and a depth sensor. The system also includes a controller configured to determine an actual pose of the capturing device; determine a desired pose of the virtual camera for showing the virtual view; define an epipolar geometry between the actual pose of the capturing device and the desired pose of the virtual camera; and generate a virtual image depicting objects within the input image according to the desired pose of the virtual camera for the virtual camera based on an epipolar relation between the actual pose of the capturing device, the input image, and the desired pose of the virtual camera.
    Type: Grant
    Filed: August 13, 2021
    Date of Patent: September 26, 2023
    Assignee: GM GLOBAL TECHNOLOGY OPERATIONS LLC
    Inventors: Michael Slutsky, Albert Shalumov
  • Patent number: 11744652
    Abstract: Various embodiments of an apparatus, methods, systems and computer program products described herein are directed to Field Visualization Engine. The Field Visualization Engine tracks one or more collimator poses relative to one or more Augmented Reality (AR) headset device poses. Each respective collimator pose and each respective headset device pose corresponds to a three-dimensional (3D) unified coordinate space (“3D space”). The Field Visualization Engine generates an AR representation of a beam emanating from the collimator based at least on a current collimator pose and a current headset device pose. The Field Visualization Engine further generates an AR visualization of emanation of the beam throughout an AR display of medical data.
    Type: Grant
    Filed: July 22, 2022
    Date of Patent: September 5, 2023
    Assignee: Medivis, Inc.
    Inventors: Long Qian, Christopher Morley, Osamah Choudhry
  • Patent number: 11733041
    Abstract: An apparatus and method are presented comprising one or more sensors or cameras configured to rotate about a central motor. In some examples, the motor is configured to travel at a constant linear speed while the one or more cameras face downward and collect a set of images in a predetermined region of interest. The apparatus and method are configured for image acquisition with non-sequential image overlap. The apparatus and method are configured to eliminate gaps in image detection for fault-proof collection of imagery for an underwater survey. In some examples, long baseline (LBL) is utilized for mapping detected images to a location. In some examples, ultra-short baseline (USBL) is utilized for mapping detected images to a location. The apparatus and method are configured to utilize a simultaneous localization and mapping (SLAM) approach.
    Type: Grant
    Filed: August 2, 2021
    Date of Patent: August 22, 2023
    Assignee: University of New Hampshire
    Inventor: Yuri Rzhanov
  • Patent number: 11722768
    Abstract: A method and an apparatus for controlling a camera, and a medium and an electronic device are disclosed. The method includes: acquiring head portrait information of a teacher in a video frame image of a live classroom in real time; analyzing the head portrait information to acquire organ identification information of each organ, wherein the organ identification information is used to indicate whether the organ exists; determining an orientation type of a face in the head portrait information based on the organ identification information, wherein the orientation type comprises a forward type, a lateral type, and a backward type; controlling the camera to focus on the teacher in response to the orientation type being the forward type; and controlling the camera to focus on a blackboard in response to the orientation type being the backward type or the lateral type.
    Type: Grant
    Filed: July 29, 2021
    Date of Patent: August 8, 2023
    Assignee: BEIJING AMBOW SHENGYING EDUCATION AND TECHNOLOGY CO., LTD.
    Inventors: Jin Huang, Gang Huang, Kesheng Wang, Yin Yao, Qiaoling Xu
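    Illustrative sketch (not from the patent): one possible mapping from organ identification flags to the orientation type, and from the orientation type to a focus target. The specific heuristic (two eyes and the nose mean forward, one eye means lateral) and the flag names are assumptions for illustration.

```python
def classify_orientation(organs_present):
    """Map organ identification flags (dict of booleans) to an orientation
    type. Assumed rule of thumb: both eyes and the nose visible -> forward;
    exactly one eye visible -> lateral; no eyes visible -> backward."""
    eyes = sum(organs_present.get(k, False) for k in ("left_eye", "right_eye"))
    if eyes == 2 and organs_present.get("nose", False):
        return "forward"
    return "lateral" if eyes == 1 else "backward"

def focus_target(orientation_type):
    """Forward -> focus the camera on the teacher; lateral or backward ->
    focus it on the blackboard."""
    return "teacher" if orientation_type == "forward" else "blackboard"
```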
  • Patent number: 11721048
    Abstract: The present disclosure relates to an image processing apparatus and method by which degradation of quality due to two-dimensional projection of 3D data can be suppressed. All pieces of data for each position included in 3D data representative of a three-dimensional structure are projected to a two-dimensional plane of plural layers. Further, all pieces of data for each position of 3D data projected to a two-dimensional plane having the number of layers indicated by layer number information are projected to a three-dimensional space. The present disclosure can be applied, for example, to an information processing apparatus, an image processing apparatus, electronic equipment, an information processing method, and a program.
    Type: Grant
    Filed: January 7, 2019
    Date of Patent: August 8, 2023
    Assignee: SONY CORPORATION
    Inventors: Ohji Nakagami, Koji Yano, Satoru Kuma, Tsuyoshi Kato
  • Patent number: 11692844
    Abstract: A display apparatus for a vehicle includes: a controller configured to create map information; and a display device configured to display the map information created by the controller, wherein the controller controls the display device to display a path guidance texture based on a road shape when guiding a path among the map information.
    Type: Grant
    Filed: August 25, 2020
    Date of Patent: July 4, 2023
    Assignees: Hyundai Motor Company, Kia Motors Corporation
    Inventors: Su Jin Kwon, Bum Hee Chung, Paul Choo
  • Patent number: 11677925
    Abstract: An information processing apparatus, which transmits, to an image processing apparatus for generating a virtual viewpoint image, at least some of a plurality of images based on image capturing from a plurality of different directions, obtains an image based on image capturing by an image capturing apparatus, obtains camera viewpoint information about at least one of a position and orientation of the image capturing apparatus, obtains virtual viewpoint information about at least one of a position and orientation of the virtual viewpoint, reduces an information amount of the obtained image based on the camera viewpoint information and the virtual viewpoint information, and transmits the image with the reduced information amount to the image processing apparatus.
    Type: Grant
    Filed: August 27, 2021
    Date of Patent: June 13, 2023
    Assignee: CANON KABUSHIKI KAISHA
    Inventor: Daichi Adachi
  • Patent number: 11670039
    Abstract: Bordering pixels delineating a texture hole region are identified in a target image. Depth values of the bordering pixels are automatically clustered into two depth value clusters. A specific estimation direction is selected from multiple candidate estimation directions for a texture hole pixel in a texture hole region. A depth value of the texture hole pixel is estimated by interpolating depth values of two bordering background pixels in the specific estimation direction. The estimated depth value is used to warp the texture hole pixel into a reference view represented by a temporal reference image. A pixel value of the texture hole pixel is predicted based on a reference pixel value of a reference pixel from the reference image to which the texture hole pixel is warped using the estimated depth value.
    Type: Grant
    Filed: March 4, 2020
    Date of Patent: June 6, 2023
    Assignee: DOLBY LABORATORIES LICENSING CORPORATION
    Inventors: Wenhui Jia, Haricharan Lakshman, Ajit Ninan
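    Illustrative sketch (not from the patent): estimating a texture-hole pixel's depth by walking along one candidate estimation direction in both senses until a bordering background pixel is found on each side, then interpolating the two depths by distance. Representing hole pixels as NaN and splitting the bordering depths with a single threshold (instead of two-cluster analysis) are simplifications.

```python
import numpy as np

def estimate_hole_depth(depth, hole_rc, direction, background_min_depth):
    """Walk from the texture-hole pixel along `direction` in both senses until
    a bordering background pixel (finite depth >= background_min_depth) is hit
    on each side, then interpolate the two depths by inverse distance.
    Hole pixels are assumed to carry NaN in the `depth` map."""
    rows, cols = depth.shape
    r, c = hole_rc
    dr, dc = direction
    hits = []
    for sign in (1, -1):
        rr, cc, steps = r, c, 0
        while 0 <= rr < rows and 0 <= cc < cols:
            d = depth[rr, cc]
            if not np.isnan(d) and d >= background_min_depth:
                hits.append((steps, d))         # bordering background pixel found
                break
            rr, cc, steps = rr + sign * dr, cc + sign * dc, steps + 1
    if len(hits) < 2:
        return None                             # this estimation direction is unusable
    (d1, z1), (d2, z2) = hits
    return (d2 * z1 + d1 * z2) / (d1 + d2)      # the closer border pixel weighs more
```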
  • Patent number: 11625896
    Abstract: A face modeling method and apparatus, an electronic device and a computer-readable medium are provided. The method comprises: acquiring multiple depth images, the multiple depth images being obtained by photographing a target face at different irradiation angles; performing alignment processing on the multiple depth images to obtain a target point cloud image; and using the target point cloud image to construct a three-dimensional model of the target face. The present disclosure alleviates the technical problems of poor robustness and low precision of three-dimensional models constructed according to existing three-dimensional model construction methods.
    Type: Grant
    Filed: August 9, 2019
    Date of Patent: April 11, 2023
    Assignee: BEIJING KUANGSHI TECHNOLOGY CO., LTD.
    Inventors: Liang Qiao, Keqing Chen, Haibin Huang
  • Patent number: 11605184
    Abstract: A method of mapping 3D point cloud data into 2D surfaces for efficient temporal coding is described herein. Point cloud global tetris packing utilizes 3D surface patches to represent point clouds and performs temporally consistent global mapping of 3D patch surface data into 2D canvas images.
    Type: Grant
    Filed: July 26, 2021
    Date of Patent: March 14, 2023
    Assignee: SONY CORPORATION
    Inventor: Danillo Graziosi
  • Patent number: 11593995
    Abstract: Various implementations disclosed herein include devices, systems, and methods for generating variations of an object. In various implementations, a device includes a display, a non-transitory memory and one or more processors coupled with the display and the non-transitory memory. In some implementations, a method includes obtaining a request to populate an environment with variations of an object characterized by at least one visual property. In some implementations, the method includes generating the variations of the object by assigning corresponding values for the at least one visual property based on one or more distribution criterion. In some implementations, the method includes displaying the variations of the object in the setting in order to satisfy a presentation criterion.
    Type: Grant
    Filed: January 22, 2021
    Date of Patent: February 28, 2023
    Assignee: APPLE INC.
    Inventors: Stuart Hari Ferguson, Richard Ignatius Punsal Lozada, James Graham McCarter
  • Patent number: 11561651
    Abstract: A method and an apparatus for implementing a virtual paintbrush are provided. The method includes: acquiring a real shooting scene by a camera of a terminal device; forming, based on an operation performed on the terminal device, a handwriting area of the virtual paintbrush in the real shooting scene; and forming handwriting of the virtual paintbrush based on the handwriting area, where the handwriting is fused with the real shooting scene and a fused image is displayed on the terminal device.
    Type: Grant
    Filed: October 26, 2021
    Date of Patent: January 24, 2023
    Assignee: BEIJING BYTEDANCE NETWORK TECHNOLOGY CO., LTD.
    Inventor: Yi Chen
  • Patent number: 11556584
    Abstract: There is disclosed a system, apparatus and methods for optimizing photo selection. When a photographer takes photos as requested in a shot list, the photos are automatically assigned a quality score which correlates to how prominently the photo would be displayed in an online search. The photos and the quality scores are displayed to the photographer so that when the photographer has shot a sufficiently high quality photo then the photographer can stop shooting. Photos with the highest quality scores are optimal. The shot lists include reference photos, and if a new photo has a higher quality score than the corresponding reference photo, the new photo becomes the reference photo.
    Type: Grant
    Filed: March 31, 2022
    Date of Patent: January 17, 2023
    Assignee: Aircam Inc.
    Inventors: Evan Rifkin, Ryan Rifkin, David Hopkins, Jonathan Angelo, Marcus Buffet
  • Patent number: 11558598
    Abstract: A control apparatus controls a virtual camera according to a user operation related to an operation of the virtual camera. When the control apparatus accepts the user operation, it determines whether or not to restrict movement of the virtual camera according to the accepted user operation, depending on whether or not a predetermined condition for the virtual camera is fulfilled.
    Type: Grant
    Filed: February 23, 2021
    Date of Patent: January 17, 2023
    Assignee: CANON KABUSHIKI KAISHA
    Inventor: Tomohiro Yano
  • Patent number: 11556220
    Abstract: Embodiments of a 3D web interaction system are disclosed that allow a user to select a content item from a browser, displayed in an artificial reality environment, and present a corresponding version of the content item in the artificial reality environment. The 3D web interaction system can create the version of the selected content item in different ways depending on whether the selected content item is associated with 3D content and, if so, the type of the associated 3D content. For example, the 3D web interaction system can create and present different versions of the selected content item depending on whether the selected content item is (a) not associated with 3D content, (b) associated with “environment content,” or (c) associated with one or more 3D models.
    Type: Grant
    Filed: July 6, 2021
    Date of Patent: January 17, 2023
    Assignee: Meta Platforms Technologies, LLC
    Inventors: Joshua Jacob Inch, Reilly Donovan, Diana Min Liao, Justin Rogers
  • Patent number: 11553123
    Abstract: Techniques in connection with a light field camera array are disclosed, involving generating temperature data for an imaging camera included in an imaging camera array at a first time, obtaining image data from the imaging camera, generating temperature-based correction parameters corresponding to the temperature data based on at least stored temperature calibration data, and producing corrected image data by applying a geometric distortion correction and/or color correction indicated by the temperature-based correction parameters to the image data.
    Type: Grant
    Filed: July 18, 2019
    Date of Patent: January 10, 2023
    Assignee: Microsoft Technology Licensing, LLC
    Inventor: Ross Garrett Cutler
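    Illustrative sketch (not from the patent): generating temperature-based correction parameters by interpolating stored per-temperature calibration data to the measured sensor temperature. Storing the calibration as a sorted temperature axis plus a parameter matrix is an assumption.

```python
import numpy as np

def correction_for_temperature(temp_c, calib_temps, calib_params):
    """Interpolate stored calibration data to the measured sensor temperature.
    `calib_temps` is a sorted 1D array of calibration temperatures and
    `calib_params` an (N, P) array of correction parameters (e.g. distortion
    or color coefficients) measured at those temperatures."""
    return np.array([np.interp(temp_c, calib_temps, calib_params[:, j])
                     for j in range(calib_params.shape[1])])
```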
  • Patent number: 11551402
    Abstract: A computer-implemented method is provided for visualizing multiple objects in a computerized visual environment. The method includes displaying to a user a virtual three-dimensional space via a viewing device worn by the user, and determining a data limit of the viewing device for object rendering. The method includes presenting an initial rendering of the objects within the virtual space, where the visualization data used for the initial rendering does not exceed the data limit of the viewing device. The method also includes tracking user attention relative to the objects as the user navigates through the virtual space and determining, based on the tracking of user attention, one or more select objects from the multiple objects to which the user is paying attention. The one or more select objects are located within a viewing range of the user.
    Type: Grant
    Filed: July 20, 2021
    Date of Patent: January 10, 2023
    Assignee: FMR LLC
    Inventors: David Martin, Adam Schouela, Jason Mcevoy
  • Patent number: 11544894
    Abstract: A method includes the steps of receiving training data comprising images of an object and associated camera poses from which the images are captured, training, based on the training data, a machine-learning model to take as input a given viewpoint and synthesize an image of a virtual representation of the object viewed from the given viewpoint, generating, for each of predetermined viewpoints surrounding the virtual representation of the object, a view-dependent image of the object as viewed from that viewpoint using the trained machine-learning model, receiving, from a client device, a desired viewpoint from which to view the virtual representation of the object, selecting one or more of the predetermined viewpoints based on the desired viewpoint, and sending, to the client device, the view-dependent images associated with the selected one or more viewpoints for rendering an output image of the virtual representation of the object viewed from the desired viewpoint.
    Type: Grant
    Filed: February 26, 2021
    Date of Patent: January 3, 2023
    Assignee: Meta Platforms Technologies, LLC
    Inventors: Brian Funt, Reza Nourai, Volga Aksoy, Zeyar Htet
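    Illustrative sketch (not from the patent): selecting the predetermined viewpoints whose view-dependent images should be sent to the client, here by angular distance between viewing directions. Representing viewpoints as unit direction vectors toward the object and k = 3 are assumptions.

```python
import numpy as np

def select_nearest_viewpoints(predetermined_dirs, desired_dir, k=3):
    """Return the indices of the k predetermined viewpoints (unit direction
    vectors toward the object, shape (N, 3)) closest in angle to the desired
    viewpoint; their pre-rendered view-dependent images would be sent to the
    client for rendering the output image."""
    pre = predetermined_dirs / np.linalg.norm(predetermined_dirs, axis=1, keepdims=True)
    des = desired_dir / np.linalg.norm(desired_dir)
    angles = np.arccos(np.clip(pre @ des, -1.0, 1.0))
    return np.argsort(angles)[:k]
```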
  • Patent number: 11544778
    Abstract: The disclosure extends to methods, systems, and computer program products for producing financial goal planning having two dimensional and three dimensional graphical representations for financial goals.
    Type: Grant
    Filed: September 9, 2014
    Date of Patent: January 3, 2023
    Assignee: MX TECHNOLOGIES, INC.
    Inventors: John Ryan Caldwell, Ronald Brennan Knotts, Jonathan R. Hopkins
  • Patent number: 11543551
    Abstract: Disclosed are methods of marine 3D seismic data acquisition that do not require compensation for winds and currents.
    Type: Grant
    Filed: March 11, 2019
    Date of Patent: January 3, 2023
    Assignee: SHEARWATER GEOSERVICES SOFTWARE INC.
    Inventors: Peter M. Eick, Joel D. Brewer, Charles Clinton Mosher
  • Patent number: 11521351
    Abstract: In an example, a method includes acquiring, at a processor, a data model of an object to be generated in additive manufacturing, the data model comprising object model data representing a slice of the object model as a plurality of polygons and object property data comprising property data associated with the plurality of polygons. The slice may be inspected from a predetermined perspective at a plurality of discrete locations. It may be determined if each location is within a face of a polygon, and if so, the object property data associated with that polygon may be identified and associated with that location. The slice may further be inspected at a plurality of discrete locations along an edge of a polygon, the object property data associated with each location may be identified and associated with that location.
    Type: Grant
    Filed: July 10, 2017
    Date of Patent: December 6, 2022
    Assignee: Hewlett-Packard Development Company, L.P.
    Inventors: Josh Shepherd, Matthew A Shepherd
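    Illustrative sketch (not from the patent): the per-location lookup described above, testing whether a discrete inspection location falls within the face of a slice polygon (ray-casting test) and, if so, returning that polygon's associated property data. The data layout is an assumption.

```python
def point_in_polygon(x, y, polygon):
    """Ray-casting test: is the discrete inspection location (x, y) inside
    `polygon`, given as a list of (x, y) vertices of one slice polygon?"""
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        if (y1 > y) != (y2 > y):                       # edge straddles the scan line
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

def property_at(x, y, polygons_with_properties):
    """Return the object property data of the first polygon whose face
    contains the location, mirroring the per-location association step."""
    for vertices, prop in polygons_with_properties:
        if point_in_polygon(x, y, vertices):
            return prop
    return None
```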
  • Patent number: 11521395
    Abstract: The present technology relates to an image processing device, an image processing method, and a program capable of making it easier to recognize standing objects. A movement transformation that moves the subject position at which a subject appears in a target image to be processed is performed depending on the distance from a vanishing point in the target image to that subject position. The present technology can be applied, for example, to processing of an image taken by a camera unit onboard a vehicle or other moving body.
    Type: Grant
    Filed: May 16, 2018
    Date of Patent: December 6, 2022
    Assignee: Sony Semiconductor Solutions Corporation
    Inventor: Satoshi Nakayama
  • Patent number: 11508126
    Abstract: Methods and systems of rendering a pathway for a virtual tour of a predefined premises are disclosed. A method includes receiving a three-dimensional floor plan of the predefined premises, generating the virtual tour of the predefined premises based on the three-dimensional floor plan, identifying a plurality of pathways within the three-dimensional floor plan for exploring the predefined premises, and receiving details pertaining to a position and orientation of the user during the virtual tour. The position and the orientation are detected by at least one sensor of a Virtual Reality (VR) enabled device of the user. The method includes selecting a pathway based on the position and the orientation of the user, and rendering the pathway to the VR-enabled device for the virtual tour.
    Type: Grant
    Filed: November 15, 2019
    Date of Patent: November 22, 2022
    Inventor: Arvind Prakash Bangalore Sadashiv Singh
  • Patent number: 11494067
    Abstract: The information processing apparatus of the present invention is an information processing apparatus that outputs viewpoint information for generation of a virtual viewpoint image based on image data obtained by performing image capturing from directions different from one another by a plurality of image capturing apparatuses and comprises: an acquisition unit configured to acquire viewpoint information having a plurality of virtual viewpoint parameter sets respectively indicating positions and orientations of a virtual viewpoint at a plurality of points in time; a change unit configured to change a virtual viewpoint parameter set included in the viewpoint information based on a user operation during playback of a virtual viewpoint image in accordance with viewpoint information acquired by the acquisition unit; and an output unit configured to output viewpoint information having a virtual viewpoint parameter set changed by the change unit.
    Type: Grant
    Filed: August 9, 2019
    Date of Patent: November 8, 2022
    Assignee: CANON KABUSHIKI KAISHA
    Inventor: Kazuhiro Matsubayashi
  • Patent number: 11496708
    Abstract: A video conference system including a transmitter device and a receiver device is provided. The transmitter device includes a transmitter control unit, a transmitter input interface, a transmitter video circuit and a first wireless transmission module. The transmitter control unit is coupled to a video output port of an information system and receive a first video data from the video output port. The transmitter input interface receives a second video data from a first video source. The transmitter video circuit combines the first video data and the second video data as a combined video data. The first wireless transmission module transmits the combined video data to the receiver device. The receiver device, coupled to the display device, includes a second wireless transmission module, which receives the combined video data. The receiver device transmits the combined video data to the display device.
    Type: Grant
    Filed: December 30, 2020
    Date of Patent: November 8, 2022
    Assignee: BenQ Corporation
    Inventors: Lin-Yuan You, Chen-Chi Wu
  • Patent number: 11480661
    Abstract: In an example embodiment, a process may select high-density points from a point cloud. The process may create one or more clusters from the high-density points and identify a circular cluster from the created clusters. The process may identify which points in the circular cluster are inner edge points and determine a center of an ellipse that fits the inner edge points. The process may define a search space utilizing the center of the ellipse. The process may determine the estimated x, y, and z coordinates for the position of the scanner in the search space utilizing a non-linear least-squares solver with different combinations of a relationship that is true for any pair of points of the cluster. An application may utilize the determined position with an object/file format (e.g., LSA format) to generate a high-resolution 3D mesh of a scene.
    Type: Grant
    Filed: August 2, 2019
    Date of Patent: October 25, 2022
    Assignee: Bentley Systems, Incorporated
    Inventors: Cyril Novel, Alexandre Gbaguidi Aisse
  • Patent number: 11481529
    Abstract: A method is provided for dimensioning a cross section of a structural product, the cross section having an arbitrary shape. The method includes defining and thereby producing a first definition of the cross section, and accessing template cross sections of various shapes, the template cross sections having respective second definitions of the template cross sections. The method includes performing a comparison of the first definition of the cross section and the respective second definitions of the template cross sections. The method includes identifying a matching one of the template cross sections based on the comparison, the matching one of the template cross sections further having respective locations from which the matching one of the template cross sections is dimensioned. And the method includes applying the respective locations to the cross section, and dimensioning the cross section from the respective locations.
    Type: Grant
    Filed: April 24, 2020
    Date of Patent: October 25, 2022
    Assignee: The Boeing Company
    Inventors: Samir Abad, Sucheth Misquith, Sameer Kate, Linza Varghese, Maher James Chinnathurai
  • Patent number: 11468609
    Abstract: The techniques described herein relate to methods, apparatus, and computer readable media configured to generate point cloud histograms. A one-dimensional histogram can be generated by determining a distance to a reference for each 3D point of a 3D point cloud. A one-dimensional histogram is generated by adding, for each histogram entry, distances that are within the entry's range of distances. A two-dimensional histogram can be determined by generating a set of orientations by determining, for each 3D point, an orientation with at least a first value for a first component and a second value for a second component. A two-dimensional histogram can be generated based on the set of orientations. Each bin can be associated with ranges of values for the first and second components. Orientations can be added for each bin that have first and second values within the first and second ranges of values, respectively, of the bin.
    Type: Grant
    Filed: May 10, 2021
    Date of Patent: October 11, 2022
    Assignee: Cognex Corporation
    Inventors: Hongwei Zhu, David J. Michael, Nitin M. Vaidya
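    Illustrative sketch (not from the patent): the one-dimensional histogram described above, where each entry counts the 3D points whose distance to a reference falls within that entry's range of distances. Using a reference point (rather than a plane or other reference geometry) and 64 bins are assumptions.

```python
import numpy as np

def distance_histogram(points, reference, num_bins=64, max_distance=None):
    """One-dimensional histogram of the distances from every 3D point (N, 3)
    to a reference point: each entry counts the points whose distance falls
    within that entry's range of distances."""
    d = np.linalg.norm(points - reference, axis=1)
    upper = max_distance if max_distance is not None else float(d.max())
    counts, edges = np.histogram(d, bins=num_bins, range=(0.0, upper))
    return counts, edges
```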
  • Patent number: 11468584
    Abstract: There is provided a depth information generating apparatus. A first generating unit generates first depth information on the basis of a plurality of viewpoint images which are obtained from first shooting and which have mutually-different viewpoints. A second generating unit generates second depth information for a captured image obtained from second shooting by correcting the first depth information so as to reflect a change in depth caused by a difference in a focal distance of the second shooting relative to the first shooting.
    Type: Grant
    Filed: August 6, 2020
    Date of Patent: October 11, 2022
    Assignee: CANON KABUSHIKI KAISHA
    Inventors: Yohei Horikawa, Takeshi Ogawa
  • Patent number: 11451758
    Abstract: In one embodiment, a computing system may access a first grayscale image and a second grayscale image. The system may generate a first color image and a second color image based on the first grayscale image and the second grayscale image, respectively. The system may generate affinity information based on the first grayscale image and the second grayscale image, the affinity information identifying relationships between pixels of the first grayscale image and pixels of the second grayscale image. The system may modify the color of the first color image and the second color image based on the affinity information. The system may generate a first visual output based on the modified first color image and a second visual output based on the modified second color image.
    Type: Grant
    Filed: February 12, 2020
    Date of Patent: September 20, 2022
    Assignee: Meta Platforms Technologies, LLC
    Inventors: Gaurav Chaurasia, Alexander Sorkine Hornung, David Novotny, Nikola Dodik
  • Patent number: 11450019
    Abstract: A computing system is configured to train an object classifier. Monocular image data and ground-truth data are received for a scene. Geometric context is determined including a three-dimensional camera position relative to a fixed plane. Regions of interest (RoI) and a set of potential occluders are identified within the image data. For each potential occluder, an occlusion zone is projected onto the fixed plane in three-dimensions. A set of occluded RoIs on the fixed plane are generated for each occlusion zone. Each occluded RoI is projected back to the image data in two-dimensions. The classifier is trained by minimizing a loss function generated by inputting information regarding the RoIs and the occluded RoIs into the classifier, and by minimizing location errors of each RoI and each occluded RoI of the set on the fixed plane based on the ground-truth data. The trained classifier is then output for object detection.
    Type: Grant
    Filed: October 22, 2020
    Date of Patent: September 20, 2022
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Ishani Chakraborty, Gang Hua