Space Transformation Patents (Class 345/427)
  • Patent number: 12041388
    Abstract: The invention is directed towards a system where a head-mounted display (HMD), or other head mounted computing device, may annotate video data captured by the HMD. When a video is annotated on the computer processor, the data packet associated with that digital marker is encoded as an operation-encoded audio packet and sent over a secure audio link to the HMD. Sending the operation-encoded audio packet over the secure audio link requires the HMD to decode the packet into a data packet which may then be used to annotate a display on the HMD. The user of the HMD may be able to view his own recorded field of view on a display device which may then be annotated using the data in the data packet.
    Type: Grant
    Filed: June 8, 2022
    Date of Patent: July 16, 2024
    Assignee: REALWEAR, INC.
    Inventor: Chris Parkinson
  • Patent number: 12032802
    Abstract: This invention relates to panning in a three-dimensional environment on a mobile device. In an embodiment, a computer-implemented method navigates a virtual camera in a three-dimensional environment on a mobile device having a touch screen. A user input is received indicating that an object has touched a first point on a touch screen of the mobile device and the object has been dragged to a second point on the touch screen. A first target location in the three-dimensional environment is determined based on the first point on the touch screen. A second target location in the three-dimensional environment is determined based on the second point on the touch screen. Finally, a three-dimensional model is moved in the three-dimensional environment relative to the virtual camera according to the first and second target locations.
    Type: Grant
    Filed: July 2, 2021
    Date of Patent: July 9, 2024
    Assignee: GOOGLE LLC
    Inventor: David Kornmann
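    The panning step described in this abstract maps two touch points to target locations in the scene and moves the model by their difference. The sketch below illustrates that idea under simple assumptions (a pinhole camera and a flat ground plane at z = 0); the function names, camera parameters and ground-plane choice are illustrative, not taken from the patent.
```python
import numpy as np

def screen_to_ground(point_px, cam_pos, cam_rot, fx, fy, cx, cy):
    """Cast a ray through a touch point and intersect it with the ground plane z = 0.

    point_px: (u, v) touch coordinates in pixels.
    cam_pos:  camera position in world space.
    cam_rot:  3x3 rotation matrix (camera-to-world).
    """
    u, v = point_px
    # Ray direction in camera space for a pinhole model.
    d_cam = np.array([(u - cx) / fx, (v - cy) / fy, 1.0])
    d_world = cam_rot @ d_cam
    # Intersect with z = 0: cam_pos.z + t * d.z = 0
    t = -cam_pos[2] / d_world[2]
    return cam_pos + t * d_world

def pan_model(model_offset, first_px, second_px, cam_pos, cam_rot, intrinsics):
    """Move the model so the ground point under the finger follows the drag."""
    fx, fy, cx, cy = intrinsics
    first_target = screen_to_ground(first_px, cam_pos, cam_rot, fx, fy, cx, cy)
    second_target = screen_to_ground(second_px, cam_pos, cam_rot, fx, fy, cx, cy)
    # Translating the model by the difference keeps the touched point under the finger.
    return model_offset + (second_target - first_target)

if __name__ == "__main__":
    cam_pos = np.array([0.0, 0.0, 10.0])          # camera 10 units above the ground
    cam_rot = np.diag([1.0, -1.0, -1.0])          # looking straight down
    intrinsics = (800.0, 800.0, 640.0, 360.0)     # fx, fy, cx, cy for a 1280x720 screen
    offset = np.zeros(3)
    offset = pan_model(offset, (640, 360), (700, 360), cam_pos, cam_rot, intrinsics)
    print(offset)
```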
  • Patent number: 12025457
    Abstract: Systems, methods, and non-transitory computer readable media configured to provide three-dimensional representations of routes. Location information for a planned movement may be obtained. The location information may include three-dimensional information of a location. Route information for the planned movement may be obtained. The route information may define a route of one or more entities within the location. A three-dimensional view of the route within the location may be determined based on the location information and the route information. An interface through which the three-dimensional view of the route within the location is accessible may be provided.
    Type: Grant
    Filed: February 21, 2023
    Date of Patent: July 2, 2024
    Assignee: Palantir Technologies Inc.
    Inventors: Richard Dickson, Mason Cooper, Quentin Le Pape
  • Patent number: 12028644
    Abstract: Various embodiments of an apparatus, method(s), system(s) and computer program product(s) described herein are directed to a Scaling Engine. The Scaling Engine identifies a background object portrayed in a background template for a video feed. The Scaling Engine determines a background template display position for concurrent display of the background object with video feed data. The Scaling Engine generates a scaled background template by modifying a current aspect ratio of the background template with the background object set at the background display position according to a video feed aspect ratio. The Scaling Engine generates a merged video feed by merging the scaled background template with live video feed data, the merged video feed data providing an unobstructed portrayal of the identified background object.
    Type: Grant
    Filed: May 19, 2022
    Date of Patent: July 2, 2024
    Assignee: Zoom Video Communications, Inc.
    Inventor: Thanh Le Nguyen
  • Patent number: 12008230
    Abstract: The present disclosure generally describe user interfaces related to time. In accordance with embodiments, user interfaces for displaying and enabling an adjustment of a displayed time zone are described. In accordance with embodiments, user interfaces for initiating a measurement of time are described. In accordance with embodiments, user interfaces for enabling and displaying a user interface using a character are described. In accordance with embodiments, user interfaces for enabling and displaying a user interface that includes an indication of a current time are described. In accordance with embodiments, user interfaces for enabling configuration of a background for a user interface are described. In accordance with embodiments, user interfaces for enabling configuration of displayed applications on a user interface are described.
    Type: Grant
    Filed: September 24, 2020
    Date of Patent: June 11, 2024
    Assignee: Apple Inc.
    Inventors: Kevin Will Chen, Teofila Connor, Aurelio Guzman, Eileen Y. Lee, Christopher Wilson, Alan C. Dye
  • Patent number: 12001646
    Abstract: According to one aspect, it becomes possible to easily modify a 3D object which is displayed in a virtual reality space. A method is performed by a computer configured to be communicable with a position detection device that includes a drawing surface and that, in operation, detects a position of an electronic pen on the drawing surface. The method includes rendering, in a virtual reality space, a first object that is a 3D object, rendering, near the first object, a display surface that is a 3D object, rendering, on the display surface, a 3D line that is a 3D object generated based on the position of the electronic pen on the drawing surface, wherein the position of the electronic pen is detected by the position detection device, and outputting the first object, the display surface, and the 3D line, which are the 3D objects, to a display.
    Type: Grant
    Filed: January 12, 2023
    Date of Patent: June 4, 2024
    Assignee: Wacom Co., Ltd.
    Inventors: Hiroshi Fujioka, Naoya Nishizawa, Kenton J. Loftus, Milen Dimitrov Metodiev, Markus Weber, Anthony Ashton
  • Patent number: 12002161
    Abstract: Methods and apparatus for a map tool displaying a three-dimensional view of a map based on a three-dimensional model of the surrounding environment. The three-dimensional map view of a map may be based on a model constructed from multiple data sets, where the multiple data sets include mapping information for an overlapping area of the map displayed in the map view. For example, one data set may include two-dimensional data including object footprints, where the object footprints may be extruded into a three-dimensional object based on data from a data set composed of three-dimensional data. In this example, the three-dimensional data may include height information that corresponds to the two-dimensional object, where the height may be obtained by correlating the location of the two-dimensional object within the three-dimensional data.
    Type: Grant
    Filed: December 21, 2018
    Date of Patent: June 4, 2024
    Assignee: Apple Inc.
    Inventors: James A. Howard, Christopher Blumenberg
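    The extrusion described above turns a 2D object footprint into a 3D object using a height obtained from a separate 3D data set. The sketch below shows only that extrusion step under simplifying assumptions: the height lookup is reduced to a dictionary, and the mesh is returned as raw vertices and faces.
```python
def extrude_footprint(footprint, height):
    """Turn a 2D footprint polygon into a simple prism mesh.

    footprint: list of (x, y) vertices, assumed counter-clockwise.
    height:    extrusion height obtained from the 3D data set.
    Returns (vertices, faces) where faces index into vertices.
    """
    n = len(footprint)
    bottom = [(x, y, 0.0) for x, y in footprint]
    top = [(x, y, height) for x, y in footprint]
    vertices = bottom + top

    faces = []
    for i in range(n):
        j = (i + 1) % n
        # Each wall is a quad between consecutive footprint edges.
        faces.append((i, j, n + j, n + i))
    faces.append(tuple(range(n)))                      # bottom cap
    faces.append(tuple(range(2 * n - 1, n - 1, -1)))   # top cap, reversed winding

    return vertices, faces

if __name__ == "__main__":
    footprint = [(0, 0), (10, 0), (10, 6), (0, 6)]
    heights = {"building_42": 25.0}   # hypothetical lookup from the 3D data set
    verts, faces = extrude_footprint(footprint, heights["building_42"])
    print(len(verts), "vertices,", len(faces), "faces")
```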
  • Patent number: 11983796
    Abstract: A method for processing an electronic image including receiving, by a viewer, the electronic image and a FOV (field of view), wherein the FOV includes at least one coordinate, at least one dimension, and a magnification factor, loading, by the viewer, a plurality of tiles within the FOV, determining, by the viewer, a state of the plurality of tiles in a cache, and in response to determining that the state of the plurality of tiles in the cache is a fully loaded state, rendering, by the viewer, the plurality of tiles to a display.
    Type: Grant
    Filed: August 4, 2022
    Date of Patent: May 14, 2024
    Assignee: Paige.AI, Inc.
    Inventors: Alexandre Kirszenberg, Razik Yousfi, Thomas Fresneau, Peter Schueffler
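    The tile-loading flow above can be illustrated as: convert the FOV into tile indices, check the cache, and render only when every tile is present. The sketch below assumes a fixed tile size, a dictionary cache and illustrative function names; none of these details come from the patent.
```python
import math

TILE_SIZE = 256  # pixels per tile edge (assumed)

def tiles_for_fov(x, y, width, height):
    """Return tile (col, row) indices covering a field of view given in pixels
    at the current magnification level."""
    col0, row0 = int(x // TILE_SIZE), int(y // TILE_SIZE)
    col1 = int(math.ceil((x + width) / TILE_SIZE)) - 1
    row1 = int(math.ceil((y + height) / TILE_SIZE)) - 1
    return [(c, r) for r in range(row0, row1 + 1) for c in range(col0, col1 + 1)]

def render_if_ready(fov, cache, draw_tile):
    """Render only when every tile in the FOV is fully loaded in the cache."""
    x, y, w, h, level = fov
    needed = [(level, c, r) for c, r in tiles_for_fov(x, y, w, h)]
    if all(key in cache for key in needed):          # "fully loaded state"
        for key in needed:
            draw_tile(key, cache[key])
        return True
    return False                                      # keep waiting / keep loading

if __name__ == "__main__":
    cache = {(2, c, r): b"pixels" for c in range(4) for r in range(4)}  # toy cache
    done = render_if_ready((100, 100, 512, 512, 2), cache,
                           lambda key, data: print("draw", key))
    print("rendered:", done)
```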
  • Patent number: 11972529
    Abstract: The disclosure concerns an augmented reality method in which visual information concerning a real-world object, structure or environment is gathered and a deformation operation is performed on that visual information to generate virtual content that may be displayed in place of, or additionally to, real-time captured image content of the real-world object, structure or environment. Some particular embodiments concern the sharing of visual environment data and/or information characterizing the deformation operation between client devices.
    Type: Grant
    Filed: February 1, 2019
    Date of Patent: April 30, 2024
    Assignee: SNAP INC.
    Inventor: David Li
  • Patent number: 11971274
    Abstract: There is provided a method for producing a high-definition (HD) map. The method includes detecting an object of a road area from an aerial image, extracting a two-dimensional (2D) coordinate value of the detected object, calculating a three-dimensional (3D) coordinate value corresponding to the 2D coordinate value by projecting the extracted 2D coordinate value onto point cloud data that constitutes the MMS (mobile mapping system) data, and generating an HD map showing a road area of the aerial image in three dimensions based on the calculated 3D coordinate value.
    Type: Grant
    Filed: November 19, 2020
    Date of Patent: April 30, 2024
    Assignee: THINKWARE CORPORATION
    Inventor: Suk Pil Ko
  • Patent number: 11948242
    Abstract: Methods and apparatuses are described for intelligent smoothing of 3D alternative reality applications for secondary 2D viewing. A computing device receives a first data set corresponding to a first position of an alternative reality viewing device. The computing device generates a 3D virtual environment for display on the alternative reality viewing device using the first data set, and a 2D rendering of the virtual environment for display on a display device using the first data set. The computing device receives a second data set corresponding to a second position of the alternative reality viewing device after movement of the alternative reality viewing device. The computing device determines whether a difference between the first data set and the second data set is above a threshold. The computing device updates the 2D rendering of the virtual environment on the display device using the second data set, when the difference is above the threshold value.
    Type: Grant
    Filed: August 27, 2021
    Date of Patent: April 2, 2024
    Assignee: FMR LLC
    Inventors: Adam Schouela, David Martin, Brian Lough, James Andersen, Cecelia Brooks
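    The smoothing described above re-renders the secondary 2D view only when the headset has moved far enough. A minimal sketch of that threshold gate follows; the pose representation (position plus Euler angles) and the way positional and angular change are combined into one number are assumptions for illustration.
```python
import numpy as np

def pose_difference(pose_a, pose_b, rotation_weight=0.1):
    """Combine positional and angular change into a single scalar.

    Each pose is (position_xyz, euler_angles_deg). The weighting of degrees
    against meters is an illustrative choice, not taken from the patent.
    """
    pos_a, rot_a = pose_a
    pos_b, rot_b = pose_b
    pos_delta = np.linalg.norm(np.asarray(pos_b) - np.asarray(pos_a))
    rot_delta = np.linalg.norm(np.asarray(rot_b) - np.asarray(rot_a))
    return pos_delta + rotation_weight * rot_delta

def maybe_update_2d_view(last_pose, new_pose, threshold, render_2d):
    """Re-render the 2D companion view only when movement exceeds the threshold."""
    if pose_difference(last_pose, new_pose) > threshold:
        render_2d(new_pose)
        return new_pose          # the new pose becomes the baseline
    return last_pose             # small jitters leave the 2D view untouched

if __name__ == "__main__":
    baseline = ((0.0, 0.0, 0.0), (0.0, 0.0, 0.0))
    jitter   = ((0.01, 0.0, 0.0), (0.5, 0.0, 0.0))
    turn     = ((0.0, 0.0, 0.0), (30.0, 0.0, 0.0))
    baseline = maybe_update_2d_view(baseline, jitter, 0.5, lambda p: print("update", p))
    baseline = maybe_update_2d_view(baseline, turn, 0.5, lambda p: print("update", p))
```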
  • Patent number: 11928783
    Abstract: Aspects of the present disclosure involve a system for presenting AR items. The system performs operations including receiving a video that includes a depiction of one or more real-world objects in a real-world environment and obtaining depth data related to the real-world environment. The operations include generating a three-dimensional (3D) model of the real-world environment based on the video and the depth data and adding an augmented reality (AR) item to the video based on the 3D model of the real-world environment. The operations include determining that the AR item has been placed on a vertical plane of the real-world environment and modifying an orientation of the AR item to correspond to an orientation of the vertical plane.
    Type: Grant
    Filed: December 30, 2021
    Date of Patent: March 12, 2024
    Assignee: Snap Inc.
    Inventors: Avihay Assouline, Itamar Berger, Gal Dudovitch, Peleg Harel, Gal Sasson
  • Patent number: 11921971
    Abstract: A live broadcasting recording equipment, a live broadcasting recording system and a live broadcasting recording method are provided. The live broadcasting recording equipment includes a camera, a processing device, and a terminal device. The camera captures images to provide photographic data. The processing device executes background removal processing on the photographic data to generate a person image. The terminal device communicates with the processing device and has a display. The processing device executes multi-layer processing to fuse the person image, a three-dimensional virtual reality background image, an augmented reality object image, and a presentation image, and generate a composite image. After an application gateway of the processing device recognizes a login operation of the terminal device, the processing device outputs the composite image to the terminal device, so that the display of the terminal device displays the composite image.
    Type: Grant
    Filed: April 11, 2022
    Date of Patent: March 5, 2024
    Assignee: Optoma China Co., Ltd
    Inventors: Kai-Ming Guo, Tian-Shen Wang, Zi-Xiang Xiao, Yi-Wei Lee
  • Patent number: 11922632
    Abstract: A human face data processing method according to an embodiment of the present disclosure includes acquiring a picture of a human face by means of a scanning apparatus, obtaining point cloud information by means of a structured light stripe, further obtaining a three-dimensional model of the human face, and mapping the three-dimensional model onto a circular plane in an area-preserving manner so as to form a two-dimensional human face image. Three-dimensional data is thus converted into two-dimensional data, thereby facilitating data storage. In addition, because the mapping is area-preserving, the restoration quality is better when the two-dimensional data is restored to three-dimensional data, thereby facilitating the re-utilization of a three-dimensional image.
    Type: Grant
    Filed: November 4, 2020
    Date of Patent: March 5, 2024
    Assignee: BEIJING GMINE VISION TECHNOLOGIES LTD.
    Inventors: Wei Chen, Boyang Wu
  • Patent number: 11915342
    Abstract: Systems, methods, and non-transitory computer-readable media can obtain data associated with a computer-based experience. The computer-based experience can be based on interactive real-time technology. At least one virtual camera can be configured within the computer-based experience in a real-time engine. Data associated with an edit cut of the computer-based experience can be obtained based on content captured by the at least one virtual camera. A plurality of shots that correspond to two-dimensional content can be generated from the edit cut of the computer-based experience in the real-time engine. Data associated with a two-dimensional version of the computer-based experience can be generated with the real-time engine based on the plurality of shots. The two-dimensional version can be rendered based on the generated data.
    Type: Grant
    Filed: July 15, 2022
    Date of Patent: February 27, 2024
    Assignee: Baobab Studios Inc.
    Inventors: Mikhail Stanislavovich Solovykh, Wei Wang, Nathaniel Christopher Dirksen, Lawrence David Cutler, Apostolos Lerios
  • Patent number: 11897394
    Abstract: A head up display for a vehicle including a display device configured to output light forming an image, an optical system configured to control a path of the light such that the image is output towards a light transmission region, and a controller configured to generate the image based on a first view and a second view such that a virtual image is produced on a ground surface in the light transmission region, the first view being towards the ground surface, the second view being towards a 3D space above the ground surface, the first view and the second view being based on an eye-box, the ground surface being in front of the vehicle, and the virtual image including a graphic object having a stereoscopic effect, and control the display device to output the image.
    Type: Grant
    Filed: July 20, 2021
    Date of Patent: February 13, 2024
    Assignee: NAVER LABS CORPORATION
    Inventors: Jae Won Cha, Jeseon Lee, Kisung Kim, Jongjin Park, Eunyoung Jeong, Yongho Shin
  • Patent number: 11900528
    Abstract: A method of rendering a view is disclosed. Three occlusion planes associated with an interior cavity of a three-dimensional object included in the view are created. The three occlusion planes are positioned based on a camera position and orientation. Any objects or parts of objects that are in a line of sight between the camera and any one of the three occlusion planes are culled. The view is rendered from the perspective of the camera.
    Type: Grant
    Filed: May 27, 2021
    Date of Patent: February 13, 2024
    Assignee: Unity IPR ApS
    Inventors: Andrew Peter Maneri, Donnavon Troy Webb, Jonathan Randall Newberry
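    One way to read the culling step above is as a visibility test between the camera and each occlusion plane: anything lying on that line of sight is dropped. The sketch below simplifies this heavily, testing object bounding spheres against the segment from the camera to each plane's centre; the representation of planes and objects is assumed, not taken from the patent.
```python
import numpy as np

def point_segment_distance(point, seg_start, seg_end):
    """Distance from a point to a line segment."""
    d = seg_end - seg_start
    t = np.clip(np.dot(point - seg_start, d) / np.dot(d, d), 0.0, 1.0)
    return np.linalg.norm(point - (seg_start + t * d))

def cull_objects(camera_pos, occlusion_plane_centers, objects):
    """Drop objects whose bounding sphere blocks the view of any occlusion plane.

    objects: list of (name, center, radius). The segment test against the plane
    centre is a simplification of a full camera-to-plane visibility test.
    """
    visible = []
    for name, center, radius in objects:
        blocking = any(
            point_segment_distance(center, camera_pos, plane_center) < radius
            for plane_center in occlusion_plane_centers
        )
        if not blocking:
            visible.append(name)
    return visible

if __name__ == "__main__":
    camera = np.array([0.0, 0.0, 0.0])
    planes = [np.array([0.0, 0.0, 10.0])]                     # one cavity-facing plane
    objects = [
        ("door_panel", np.array([0.0, 0.0, 5.0]), 1.0),       # sits in the line of sight
        ("side_mirror", np.array([5.0, 0.0, 5.0]), 1.0),      # off to the side
    ]
    print(cull_objects(camera, planes, objects))               # ['side_mirror']
```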
  • Patent number: 11889222
    Abstract: The present disclosure provides a system and method for creating a multilayer scene using multiple visual input data, and for injecting an image of an actor into the multilayer scene to produce an output video approximating a three-dimensional space, which conveys depth by visualizing the actor in front of some layers and behind others. This is useful for many situations where the actor needs to appear on a display with other visual items without overlapping or occluding those items. A user may interact with other virtual objects or items in the scene, or even with other users visualized in the scene.
    Type: Grant
    Filed: July 22, 2021
    Date of Patent: January 30, 2024
    Inventor: Malay Kundu
  • Patent number: 11887499
    Abstract: The present invention relates to a virtual-scene-based language-learning system comprising at least a scheduling and managing module and a scene-editing module, and further comprising an association-analyzing module. The scheduling and managing module is connected to the scene-editing module and the association-analyzing module, respectively, in a wired or wireless manner. The association-analyzing module analyzes second-language information input by a user and provides at least one associated image and/or picture, and displays the associated image and/or picture selected by the user on a client, so that a teacher at the client is able to understand the language information expressed in the second language by the student based on the associated image and/or picture.
    Type: Grant
    Filed: July 13, 2021
    Date of Patent: January 30, 2024
    Inventor: Ailin Sha
  • Patent number: 11875583
    Abstract: The present invention belongs to the technical field of 3D reconstruction in the field of computer vision, and provides a dataset generation method for self-supervised learning of scene point cloud completion based on panoramas. Pairs of incomplete point clouds and target point clouds with RGB information and normal information can be generated by taking RGB panoramas, depth panoramas and normal panoramas in the same view as input, so as to construct a self-supervised learning dataset for training a scene point cloud completion network. The key points of the present invention are occlusion prediction and equirectangular projection based on view conversion, together with handling of the stripe problem and the point-to-point occlusion problem during conversion. The method includes a simplified collection mode for point cloud data in a real scene, an occlusion-prediction approach to view conversion, and the design of a view selection strategy.
    Type: Grant
    Filed: November 23, 2021
    Date of Patent: January 16, 2024
    Assignee: DALIAN UNIVERSITY OF TECHNOLOGY
    Inventors: Xin Yang, Tong Li, Baocai Yin, Zhaoxuan Zhang, Boyan Wei, Zhenjun Du
  • Patent number: 11875012
    Abstract: The technology disclosed relates to positioning and revealing a control interface in a virtual or augmented reality, and includes causing display of a plurality of interface projectiles at a first region of a virtual or augmented reality. Input is received that is interpreted as user interaction with an interface projectile. User interaction includes selecting and throwing the interface projectile in a first direction. An animation of the interface projectile is displayed along a trajectory in the first direction to a place where it lands. A blooming of the control interface from the interface projectile at the place where it lands is displayed.
    Type: Grant
    Filed: May 21, 2019
    Date of Patent: January 16, 2024
    Assignee: Ultrahaptics IP Two Limited
    Inventor: Nicholas James Benson
  • Patent number: 11856297
    Abstract: A panoramic video camera comprises a plurality of image sensors which are configured to capture a plurality of frames at a time; an image processing circuitry configured to generate a frame read signal to read the plurality of frames generated by the plurality of camera sensors, apply a cylindrical mapping function to map the plurality of frames to a cylindrical image plane and stitch the cylindrically mapped plurality of frames together in the cylindrical image plane based on a plurality of projection parameters.
    Type: Grant
    Filed: April 1, 2019
    Date of Patent: December 26, 2023
    Assignee: GN AUDIO A/S
    Inventor: Yashket Gupta
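    The cylindrical mapping step above can be illustrated with the standard cylindrical projection: each pinhole-image pixel is mapped to an angle around the cylinder and a height on it, after which the per-camera results can be stitched with horizontal offsets. The focal length and image size below are placeholders.
```python
import math

def cylindrical_coords(u, v, fx, fy, cx, cy):
    """Map a pinhole-image pixel (u, v) onto a cylindrical image plane.

    Standard cylindrical projection: the horizontal axis becomes an angle
    around the cylinder axis, the vertical axis is scaled by the ray length.
    """
    x = (u - cx) / fx
    y = (v - cy) / fy
    theta = math.atan(x)                 # angle around the cylinder axis
    h = y / math.sqrt(1.0 + x * x)       # height on the cylinder
    return fx * theta + cx, fy * h + cy  # back to pixel units

def warp_to_cylinder(width, height, fx, fy):
    """Build a lookup table from source pixels to cylindrical coordinates."""
    cx, cy = width / 2.0, height / 2.0
    return {
        (u, v): cylindrical_coords(u, v, fx, fy, cx, cy)
        for v in range(height)
        for u in range(width)
    }

if __name__ == "__main__":
    table = warp_to_cylinder(width=8, height=6, fx=4.0, fy=4.0)
    # The image centre stays put; off-centre pixels are pulled inwards horizontally.
    print(table[(4, 3)], table[(7, 3)])
```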
  • Patent number: 11854115
    Abstract: A vectorized caricature avatar generator receives a user image from which face parameters are generated. Segments of the user image including certain facial features (e.g., hair, facial hair, eyeglasses) are also identified. Segment parameter values are also determined, the segment parameter values being those parameter values from a set of caricature avatars that correspond to the segments of the user image. The face parameter values and the segment parameter values are used to generate a caricature avatar of the user in the user image.
    Type: Grant
    Filed: November 4, 2021
    Date of Patent: December 26, 2023
    Assignee: Adobe Inc.
    Inventors: Daichi Ito, Yijun Li, Yannick Hold-Geoffroy, Koki Madono, Jose Ignacio Echevarria Vallespi, Cameron Younger Smith
  • Patent number: 11842444
    Abstract: Embodiments include systems and methods for visualizing the position of a capturing device within a 3D mesh, generated from a video stream from the capturing device. A capturing device may provide a video stream along with point cloud data and camera pose data. This video stream, point cloud data, and camera pose data are then used to progressively generate a 3D mesh. The camera pose data and point cloud data can further be used, in conjunction with a SLAM algorithm, to indicate the position and orientation of the capturing device within the generated 3D mesh.
    Type: Grant
    Filed: June 2, 2021
    Date of Patent: December 12, 2023
    Assignee: STREEM, LLC
    Inventors: Sean M. Adkinson, Teressa Chizeck, Ryan R. Fink
  • Patent number: 11838486
    Abstract: In one implementation, a method of performing perspective correction is performed at a head-mounted device including one or more processors, non-transitory memory, an image sensor, and a display. The method includes capturing, using the image sensor, a plurality of images of a scene from a respective plurality of perspectives. The method includes capturing, using the image sensor, a current image of the scene from a current perspective. The method includes obtaining a depth map of the current image of the scene. The method includes transforming, using the one or more processors, the current image of the scene based on the depth map, a difference between the current perspective of the image sensor and a current perspective of a user, and at least one of the plurality of images of the scene from the respective plurality of perspectives. The method includes displaying, on the display, the transformed image.
    Type: Grant
    Filed: July 13, 2020
    Date of Patent: December 5, 2023
    Assignee: APPLE INC.
    Inventors: Samer Samir Barakat, Bertrand Nepveu, Vincent Chapdelaine-Couture
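    The transformation described above warps the sensor image toward the user's viewpoint using the depth map. The sketch below shows the core reprojection for a single pixel (unproject with depth, apply the pose difference, reproject); the intrinsics and the sensor-to-eye offset are illustrative assumptions, and the use of past images to fill disocclusions is omitted.
```python
import numpy as np

def reproject_pixel(u, v, depth, K, T_new_from_old):
    """Move one pixel from the sensor's perspective to the user's perspective.

    K:               3x3 pinhole intrinsics (shared by both views for simplicity).
    T_new_from_old:  4x4 rigid transform from the sensor pose to the eye pose.
    """
    # Unproject: pixel + depth -> 3D point in the sensor frame.
    ray = np.linalg.inv(K) @ np.array([u, v, 1.0])
    point_old = ray * depth

    # Transform the point into the eye frame.
    point_new = (T_new_from_old @ np.append(point_old, 1.0))[:3]

    # Reproject into the new image.
    proj = K @ point_new
    return proj[0] / proj[2], proj[1] / proj[2]

if __name__ == "__main__":
    K = np.array([[500.0, 0.0, 320.0],
                  [0.0, 500.0, 240.0],
                  [0.0, 0.0, 1.0]])
    # Eye sits 4 cm behind the outward-facing sensor along the optical axis (assumed).
    T = np.eye(4)
    T[2, 3] = 0.04
    # A near pixel moves more than a far pixel under the same viewpoint change.
    print(reproject_pixel(400.0, 240.0, 0.5, K, T))
    print(reproject_pixel(400.0, 240.0, 5.0, K, T))
```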
  • Patent number: 11830148
    Abstract: A mixed reality (MR) simulation system includes a console and a head mounted device (HMD). The MR system captures stereoscopic images from a real-world environment using outward-facing stereoscopic cameras mounted to the HMD. The MR system preprocesses the stereoscopic images to maximize contrast and then extracts a set of features from those images, including edges or corners, among others. For each feature, the MR system generates one or more two-dimensional (2D) polylines. Then, the MR system triangulates between 2D polylines found in right side images and corresponding 2D polylines found in left side images to generate a set of 3D polylines. The MR system interpolates between 3D vertices included in the 3D polylines or extrapolates additional 3D vertices, thereby generating a geometric reconstruction of the real-world environment. The MR system may map textures derived from the real-world environment onto the geometric representation faster than the geometric reconstruction is updated.
    Type: Grant
    Filed: July 30, 2020
    Date of Patent: November 28, 2023
    Assignee: Meta Platforms, Inc.
    Inventors: James Allan Booth, Gaurav Chaurasia, Alexandru-Eugen Ichim, Alex Locher, Gioacchino Noris, Alexander Sorkine Hornung, Manuel Werlberger
  • Patent number: 11816782
    Abstract: Systems can identify visible surfaces for pixels in an image (portion) to be rendered. A sampling pattern of ray directions is applied to the pixels, so that the sampling pattern of ray directions repeats, and with respect to any pixel, the same ray direction can be found in the same relative position, with respect to that pixel, as for other pixels. Rays are emitted from visible surfaces in the respective ray direction supplied from the sampling pattern. Ray intersections can cause shaders to execute and contribute results to a sample buffer. With respect to shading of a given pixel, ray results from a selected subset of the pixels are used; the subset is selected by identifying a set of pixels, collectively from which rays were traced for the ray directions in the pattern, and requiring that surfaces from which rays were traced for those pixels satisfy a similarity criterion.
    Type: Grant
    Filed: March 2, 2022
    Date of Patent: November 14, 2023
    Assignee: Imagination Technologies Limited
    Inventors: Gareth Morgan, Luke T. Peterson
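    The key idea above is that a small tile of ray directions repeats across the image, so the direction used at a pixel depends only on its position within the tile, and pixels sharing a tile position can share ray results. The 2x2 tile of directions in the sketch below is made up for illustration.
```python
import numpy as np

# A small tile of ray directions that repeats across the image (illustrative values).
PATTERN = np.array([
    [[0.0, 0.0, 1.0], [0.3, 0.0, 1.0]],
    [[0.0, 0.3, 1.0], [0.3, 0.3, 1.0]],
])
PATTERN = PATTERN / np.linalg.norm(PATTERN, axis=-1, keepdims=True)

def ray_direction(px, py):
    """Direction to trace from the visible surface under pixel (px, py)."""
    return PATTERN[py % PATTERN.shape[0], px % PATTERN.shape[1]]

def pixels_sharing_direction(px, py, width, height):
    """Pixels whose rays were traced in the same direction as pixel (px, py).

    When shading (px, py), ray results can be reused from this set, subject to
    the surface-similarity check described in the abstract.
    """
    ty, tx = PATTERN.shape[0], PATTERN.shape[1]
    return [(x, y) for y in range(py % ty, height, ty)
                   for x in range(px % tx, width, tx)]

if __name__ == "__main__":
    assert np.allclose(ray_direction(5, 7), ray_direction(7, 9))  # same tile position
    print(pixels_sharing_direction(1, 1, 6, 4))
```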
  • Patent number: 11804011
    Abstract: Disclosed is a method and apparatus for enabling interactive visualization of three-dimensional volumetric models. The method involves maintaining three-dimensional volumetric models represented by explicit surfaces. In accordance with an embodiment of the disclosure, the method also involves, for a current point of view, generating and displaying images of the volumetric models in a manner that clarifies internal structures by accounting for light attenuation inside the volumetric models as a function of spatial positions of the explicit surfaces. The method also involves, upon receiving user input that adjusts a display variable, repeating the generating and the displaying of the images in accordance with the display variable that has been adjusted, thereby enabling interactive visualization of the volumetric models while simultaneously clarifying the internal structures by accounting for the light attenuation inside the volumetric models.
    Type: Grant
    Filed: September 15, 2021
    Date of Patent: October 31, 2023
    Assignee: LlamaZOO Interactive Inc.
    Inventors: Charles Lavigne, Li Jl
  • Patent number: 11777616
    Abstract: A method and arrangement for testing wireless connections is provided. The method comprises obtaining (500) a three-dimensional model of a given environment; obtaining (502) ray tracing calculations describing propagation of radio frequency signals in the given environment; locating (504) one or more devices in the given environment; determining (506) utilising ray tracing calculations the radio frequency signal properties of one or more devices communicating with the device under test; transmitting (508) control information to the radio frequency controller unit for updating the connections between one or more devices and a set of antenna elements to match with the determined properties; obtaining (510) information on the location and propagation environment of the one or more devices and updating (512) the radio frequency signal properties of the one or more devices if the location or propagation environment changes.
    Type: Grant
    Filed: December 13, 2022
    Date of Patent: October 3, 2023
    Assignee: Nokia Solutions and Networks Oy
    Inventors: Juha Hannula, Marko Koskinen, Petri Koivukangas, Iikka Finning
  • Patent number: 11770495
    Abstract: Systems and methods for generating a virtual view of a virtual camera based on an input image are described. A system for generating a virtual view of a virtual camera based on an input image can include a capturing device including a physical camera and a depth sensor. The system also includes a controller configured to determine an actual pose of the capturing device; determine a desired pose of the virtual camera for showing the virtual view; define an epipolar geometry between the actual pose of the capturing device and the desired pose of the virtual camera; and generate a virtual image depicting objects within the input image according to the desired pose of the virtual camera for the virtual camera based on an epipolar relation between the actual pose of the capturing device, the input image, and the desired pose of the virtual camera.
    Type: Grant
    Filed: August 13, 2021
    Date of Patent: September 26, 2023
    Assignee: GM GLOBAL TECHNOLOGY OPERATIONS LLC
    Inventors: Michael Slutsky, Albert Shalumov
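    The epipolar relation mentioned above can be made concrete with the essential matrix between the capturing device's actual pose and the virtual camera's desired pose. The sketch below assumes both poses are given as camera-to-world rotations and camera centres in a common frame; it shows only the textbook construction, not the patent's pipeline.
```python
import numpy as np

def skew(v):
    """Cross-product matrix: skew(v) @ w == np.cross(v, w)."""
    return np.array([[0, -v[2], v[1]],
                     [v[2], 0, -v[0]],
                     [-v[1], v[0], 0]])

def essential_matrix(C1, c1, C2, c2):
    """Essential matrix between two poses given as camera-to-world rotation C
    and camera centre c, so that x2 @ E @ x1 == 0 for normalized image
    coordinates of the same world point in the two views."""
    R_rel = C2.T @ C1                 # rotation from camera 1 to camera 2
    t_rel = C2.T @ (c1 - c2)          # translation from camera 1 to camera 2
    return skew(t_rel) @ R_rel

if __name__ == "__main__":
    # Actual pose of the capturing device and desired pose of the virtual camera (illustrative).
    C1, c1 = np.eye(3), np.array([0.0, 0.0, 0.0])
    C2, c2 = np.eye(3), np.array([0.5, 0.0, 0.0])     # virtual camera shifted sideways
    E = essential_matrix(C1, c1, C2, c2)

    # Check the epipolar constraint on one world point.
    X = np.array([1.0, 2.0, 5.0])
    x1 = C1.T @ (X - c1); x1 /= x1[2]
    x2 = C2.T @ (X - c2); x2 /= x2[2]
    print(abs(x2 @ E @ x1) < 1e-9)    # True
```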
  • Patent number: 11744652
    Abstract: Various embodiments of an apparatus, methods, systems and computer program products described herein are directed to Field Visualization Engine. The Field Visualization Engine tracks one or more collimator poses relative to one or more Augmented Reality (AR) headset device poses. Each respective collimator pose and each respective headset device pose corresponds to a three-dimensional (3D) unified coordinate space (“3D space”). The Field Visualization Engine generates an AR representation of a beam emanating from the collimator based at least on a current collimator pose and a current headset device pose. The Field Visualization Engine further generates an AR visualization of emanation of the beam throughout an AR display of medical data.
    Type: Grant
    Filed: July 22, 2022
    Date of Patent: September 5, 2023
    Assignee: Medivis, Inc.
    Inventors: Long Qian, Christopher Morley, Osamah Choudhry
  • Patent number: 11733041
    Abstract: An apparatus and method are presented comprising one or more sensors or cameras configured to rotate about a central motor. In some examples, the motor is configured to travel at a constant linear speed while the one or more cameras face downward and collect a set of images in a predetermined region of interest. The apparatus and method are configured for image acquisition with non-sequential image overlap. The apparatus and method are configured to eliminate gaps in image detection for fault-proof collection of imagery for an underwater survey. In some examples, long baseline (LBL) is utilized for mapping detected images to a location. In some examples, ultra-short baseline (USBL) is utilized for mapping detected images to a location. The apparatus and method are configured to utilize a simultaneous localization and mapping (SLAM) approach.
    Type: Grant
    Filed: August 2, 2021
    Date of Patent: August 22, 2023
    Assignee: University of New Hampshire
    Inventor: Yuri Rzhanov
  • Patent number: 11722768
    Abstract: A method and an apparatus for controlling a camera, and a medium and an electronic device are disclosed. The method includes: acquiring head portrait information of a teacher in a video frame image of a live classroom in real time; analyzing the head portrait information to acquire organ identification information of each organ, wherein the organ identification information is used to indicate whether the organ exists; determining an orientation type of a face in the head portrait information based on the organ identification information, wherein the orientation type comprises a forward type, a lateral type, and a backward type; controlling the camera to focus on the teacher in response to the orientation type being the forward type; and controlling the camera to focus on a blackboard in response to the orientation type being the backward type or the lateral type.
    Type: Grant
    Filed: July 29, 2021
    Date of Patent: August 8, 2023
    Assignee: BEIJING AMBOW SHENGYING EDUCATION AND TECHNOLOGY CO., LTD.
    Inventors: Jin Huang, Gang Huang, Kesheng Wang, Yin Yao, Qiaoling Xu
  • Patent number: 11721048
    Abstract: The present disclosure relates to an image processing apparatus and method by which degradation of quality due to two-dimensional projection of 3D data can be suppressed. All pieces of data for each position included in 3D data representative of a three-dimensional structure are projected to a two-dimensional plane of plural layers. Further, all pieces of data for each position of 3D data projected to a two-dimensional plane having the number of layers indicated by layer number information are projected to a three-dimensional space. The present disclosure can be applied, for example, to an information processing apparatus, an image processing apparatus, electronic equipment, an information processing method, and a program.
    Type: Grant
    Filed: January 7, 2019
    Date of Patent: August 8, 2023
    Assignee: SONY CORPORATION
    Inventors: Ohji Nakagami, Koji Yano, Satoru Kuma, Tsuyoshi Kato
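    The projection described above writes the data for each position of the 3D data into a two-dimensional plane with several layers, so that multiple points landing on the same pixel are not lost. The sketch below assumes an orthographic projection along z and only two layers (nearest and farthest depth per pixel), which is a simplification for illustration.
```python
import numpy as np

def project_to_layers(points, grid_size):
    """Project 3D points orthographically along z onto a 2-layer depth image.

    Layer 0 keeps the nearest depth per pixel, layer 1 the farthest, so two
    points that share a pixel both survive the projection.
    """
    layers = np.full((2, grid_size, grid_size), np.nan)
    for x, y, z in points:
        u, v = int(x), int(y)
        near, far = layers[0, v, u], layers[1, v, u]
        if np.isnan(near) or z < near:
            layers[0, v, u] = z
        if np.isnan(far) or z > far:
            layers[1, v, u] = z
    return layers

def layers_to_points(layers):
    """Reconstruct 3D positions from every non-empty layer entry."""
    points = []
    for layer in layers:
        for v, u in zip(*np.nonzero(~np.isnan(layer))):
            points.append((float(u), float(v), float(layer[v, u])))
    return points

if __name__ == "__main__":
    cloud = [(1.0, 2.0, 0.5), (1.0, 2.0, 3.0), (4.0, 4.0, 1.0)]  # two points share a pixel
    layers = project_to_layers(cloud, grid_size=8)
    print(sorted(set(layers_to_points(layers))))   # all three positions survive
```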
  • Patent number: 11692844
    Abstract: A display apparatus for a vehicle includes: a controller configured to create map information; and a display device configured to display the map information created by the controller, wherein the controller controls the display device to display a path guidance texture based on a road shape when guiding a path among the map information.
    Type: Grant
    Filed: August 25, 2020
    Date of Patent: July 4, 2023
    Assignees: Hyundai Motor Company, Kia Motors Corporation
    Inventors: Su Jin Kwon, Bum Hee Chung, Paul Choo
  • Patent number: 11677925
    Abstract: An information processing apparatus, which transmits, to an image processing apparatus for generating a virtual viewpoint image, at least some of a plurality of images based on image capturing from a plurality of different directions, obtains an image based on image capturing by an image capturing apparatus, obtains camera viewpoint information about at least one of a position and orientation of the image capturing apparatus, obtains virtual viewpoint information about at least one of a position and orientation of the virtual viewpoint, reduces an information amount of the obtained image based on the camera viewpoint information and the virtual viewpoint information, and transmits the image with the reduced information amount to the image processing apparatus.
    Type: Grant
    Filed: August 27, 2021
    Date of Patent: June 13, 2023
    Assignee: CANON KABUSHIKI KAISHA
    Inventor: Daichi Adachi
  • Patent number: 11670039
    Abstract: Bordering pixels delineating a texture hole region are identified in a target image. Depth values of the bordering pixels are automatically clustered into two depth value clusters. A specific estimation direction is selected from multiple candidate estimation directions for a texture hole pixel in a texture hole region. A depth value of the texture hole pixel is estimated by interpolating depth values of two bordering background pixels in the specific estimation direction. The estimated depth value is used to warp the texture hole pixel into a reference view represented by a temporal reference image. A pixel value of the texture hole pixel is predicted based on a reference pixel value of a reference pixel from the reference image to which the texture hole pixel is warped using the estimated depth value.
    Type: Grant
    Filed: March 4, 2020
    Date of Patent: June 6, 2023
    Assignee: DOLBY LABORATORIES LICENSING CORPORATION
    Inventors: Wenhui Jia, Haricharan Lakshman, Ajit Ninan
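    The abstract above combines two steps that are easy to sketch: clustering the bordering depth values into two groups (foreground and background), and interpolating a hole pixel's depth from two background border pixels along the chosen estimation direction. The simple 1D two-means clustering and distance-weighted interpolation below are illustrative stand-ins.
```python
import numpy as np

def two_means_1d(values, iters=20):
    """Split scalar depth values into two clusters (background = larger depth)."""
    values = np.asarray(values, dtype=float)
    centers = np.array([values.min(), values.max()])
    for _ in range(iters):
        labels = np.abs(values[:, None] - centers[None, :]).argmin(axis=1)
        for k in (0, 1):
            if np.any(labels == k):
                centers[k] = values[labels == k].mean()
    return labels, centers

def estimate_hole_depth(depth_a, dist_a, depth_b, dist_b):
    """Interpolate the depth of a hole pixel from two background border pixels
    found along one estimation direction, weighted by their distances to the hole."""
    w_a = dist_b / (dist_a + dist_b)      # the nearer border pixel gets more weight
    return w_a * depth_a + (1.0 - w_a) * depth_b

if __name__ == "__main__":
    border_depths = [2.1, 2.0, 2.2, 9.8, 10.1, 9.9]      # foreground vs. background
    labels, centers = two_means_1d(border_depths)
    background = centers.argmax()                         # larger depth = background
    print("background depths:",
          [d for d, l in zip(border_depths, labels) if l == background])
    # Hole pixel lying between two background border pixels along one direction.
    print("estimated depth:", estimate_hole_depth(9.8, 3.0, 10.1, 1.0))
```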
  • Patent number: 11625896
    Abstract: A face modeling method and apparatus, an electronic device and a computer-readable medium are provided. The method comprises: acquiring multiple depth images, the multiple depth images being obtained by photographing a target face at different irradiation angles; performing alignment processing on the multiple depth images to obtain a target point cloud image; and using the target point cloud image to construct a three-dimensional model of the target face. The present disclosure alleviates the technical problems of poor robustness and low precision in three-dimensional models constructed according to existing three-dimensional model construction methods.
    Type: Grant
    Filed: August 9, 2019
    Date of Patent: April 11, 2023
    Assignee: BEIJING KUANGSHI TECHNOLOGY CO., LTD.
    Inventors: Liang Qiao, Keqing Chen, Haibin Huang
  • Patent number: 11605184
    Abstract: A method of mapping 3D point cloud data into 2D surfaces for further efficient temporal coding is described herein. Point cloud global tetris packing utilizes 3D surface patches to represent point clouds and performs temporally consistent global mapping of 3D patch surface data into 2D canvas images.
    Type: Grant
    Filed: July 26, 2021
    Date of Patent: March 14, 2023
    Assignee: SONY CORPORATION
    Inventor: Danillo Graziosi
  • Patent number: 11593995
    Abstract: Various implementations disclosed herein include devices, systems, and methods for generating variations of an object. In various implementations, a device includes a display, a non-transitory memory and one or more processors coupled with the display and the non-transitory memory. In some implementations, a method includes obtaining a request to populate an environment with variations of an object characterized by at least one visual property. In some implementations, the method includes generating the variations of the object by assigning corresponding values for the at least one visual property based on one or more distribution criterion. In some implementations, the method includes displaying the variations of the object in the setting in order to satisfy a presentation criterion.
    Type: Grant
    Filed: January 22, 2021
    Date of Patent: February 28, 2023
    Assignee: APPLE INC.
    Inventors: Stuart Hari Ferguson, Richard Ignatius Punsal Lozada, James Graham McCarter
  • Patent number: 11561651
    Abstract: A method and an apparatus for implementing a virtual paintbrush are provided. The method includes: acquiring a real shooting scene by a camera of a terminal device; forming, based on an operation performed on the terminal device, a handwriting area of the virtual paintbrush in the real shooting scene; and forming handwriting of the virtual paintbrush based on the handwriting area, where the handwriting is fused with the real shooting scene and a fused image is displayed on the terminal device.
    Type: Grant
    Filed: October 26, 2021
    Date of Patent: January 24, 2023
    Assignee: BEIJING BYTEDANCE NETWORK TECHNOLOGY CO., LTD.
    Inventor: Yi Chen
  • Patent number: 11556584
    Abstract: There is disclosed a system, apparatus and methods for optimizing photo selection. When a photographer takes photos as requested in a shot list, the photos are automatically assigned a quality score which correlates to how prominently the photo would be displayed in an online search. The photos and the quality scores are displayed to the photographer so that when the photographer has shot a sufficiently high quality photo then the photographer can stop shooting. Photos with the highest quality scores are optimal. The shot lists include reference photos, and if a new photo has a higher quality score than the corresponding reference photo, the new photo becomes the reference photo.
    Type: Grant
    Filed: March 31, 2022
    Date of Patent: January 17, 2023
    Assignee: Aircam Inc.
    Inventors: Evan Rifkin, Ryan Rifkin, David Hopkins, Jonathan Angelo, Marcus Buffet
  • Patent number: 11556220
    Abstract: Embodiments of a 3D web interaction system are disclosed that allow a user to select a content item from a browser, displayed in an artificial reality environment, and present a corresponding version of the content item in the artificial reality environment. The 3D web interaction system can create the version of the selected content item in different ways depending on whether the selected content item is associated with 3D content and, if so, the type of the associated 3D content. For example, the 3D web interaction system can create and present different versions of the selected content item depending on whether the selected content item is (a) not associated with 3D content, (b) associated with “environment content,” or (c) associated with one or more 3D models.
    Type: Grant
    Filed: July 6, 2021
    Date of Patent: January 17, 2023
    Assignee: Meta Platforms Technologies, LLC
    Inventors: Joshua Jacob Inch, Reilly Donovan, Diana Min Liao, Justin Rogers
  • Patent number: 11558598
    Abstract: A control apparatus controls a virtual camera according to a user operation related to an operation of the virtual camera. When the control apparatus accepts the user operation, it determines whether or not to restrict moving of the virtual camera according to the accepted user operation, depending on whether or not a predetermined condition for the virtual camera is fulfilled.
    Type: Grant
    Filed: February 23, 2021
    Date of Patent: January 17, 2023
    Assignee: CANON KABUSHIKI KAISHA
    Inventor: Tomohiro Yano
  • Patent number: 11553123
    Abstract: Techniques in connection with a light field camera array are disclosed, involving generating temperature data for an imaging camera included in an imaging camera array for a first time, obtaining image data from the imaging camera, generating temperature-based correction parameters corresponding to the temperature data based on at least stored temperature calibration data, and producing corrected image data by applying a geometric distortion correction and/or color correction indicated by the temperature-based correction parameters to the image data.
    Type: Grant
    Filed: July 18, 2019
    Date of Patent: January 10, 2023
    Assignee: Microsoft Technology Licensing, LLC
    Inventor: Ross Garrett Cutler
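    A minimal sketch of the correction flow described above: stored temperature calibration data maps temperatures to correction parameters, the parameters for the measured temperature are interpolated, and a color gain plus a geometric shift are applied to the image. The calibration table and the specific form of the correction are assumptions for illustration.
```python
import numpy as np

# Hypothetical stored temperature calibration data: temperature (deg C) ->
# (red gain, blue gain, horizontal shift in pixels).
CALIBRATION = {
    20.0: (1.00, 1.00, 0.0),
    40.0: (1.02, 0.98, 1.5),
    60.0: (1.05, 0.95, 3.0),
}

def correction_parameters(temperature):
    """Linearly interpolate correction parameters for the measured temperature."""
    temps = sorted(CALIBRATION)
    params = np.array([CALIBRATION[t] for t in temps])
    return tuple(np.interp(temperature, temps, params[:, i])
                 for i in range(params.shape[1]))

def apply_correction(image, temperature):
    """Apply the temperature-based color correction and geometric shift."""
    red_gain, blue_gain, shift_px = correction_parameters(temperature)
    corrected = image.astype(float)
    corrected[..., 0] *= red_gain                       # color correction
    corrected[..., 2] *= blue_gain
    corrected = np.roll(corrected, int(round(shift_px)), axis=1)  # crude distortion correction
    return corrected.clip(0, 255).astype(np.uint8)

if __name__ == "__main__":
    frame = np.full((4, 8, 3), 128, dtype=np.uint8)
    print(correction_parameters(50.0))       # halfway between the 40C and 60C entries
    print(apply_correction(frame, 50.0)[0, 0])
```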
  • Patent number: 11551402
    Abstract: A computer-implemented method is provided for visualizing multiple objects in a computerized visual environment. The method includes displaying to a user a virtual three-dimensional space via a viewing device worn by the user, and determining a data limit of the viewing device for object rendering. The method includes presenting an initial rendering of the objects within the virtual space, where the visualization data used for the initial rendering does not exceed the data limit of the viewing device. The method also includes tracking user attention relative to the objects as the user navigates through the virtual space and determining, based on the tracking of user attention, one or more select objects from the multiple objects to which the user is paying attention. The one or more select objects are located within a viewing range of the user.
    Type: Grant
    Filed: July 20, 2021
    Date of Patent: January 10, 2023
    Assignee: FMR LLC
    Inventors: David Martin, Adam Schouela, Jason Mcevoy
  • Patent number: 11544894
    Abstract: A method includes the steps of receiving training data comprising images of an object and associated camera poses from which the images are captured, training, based on the training data, a machine-learning model to take as input a given viewpoint and synthesize an image of a virtual representation of the object viewed from the given viewpoint, generating, for each of predetermined viewpoints surrounding the virtual representation of the object, a view-dependent image of the object as viewed from that viewpoint using the trained machine-learning model, receiving, from a client device, a desired viewpoint from which to view the virtual representation of the object, selecting one or more of the predetermined viewpoints based on the desired viewpoint, and sending, to the client device, the view-dependent images associated with the selected one or more viewpoints for rendering an output image of the virtual representation of the object viewed from the desired viewpoint.
    Type: Grant
    Filed: February 26, 2021
    Date of Patent: January 3, 2023
    Assignee: Meta Platforms Technologies, LLC
    Inventors: Brian Funt, Reza Nourai, Volga Aksoy, Zeyar Htet
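    The selection step above picks, from the predetermined viewpoints surrounding the object, those closest to the client's desired viewpoint. The sketch below represents viewpoints as unit viewing directions and ranks them by angular distance, returning the two nearest; both choices are assumptions for illustration.
```python
import math

def angular_distance(a, b):
    """Angle between two unit direction vectors (viewpoints looking at the object)."""
    dot = sum(x * y for x, y in zip(a, b))
    return math.acos(max(-1.0, min(1.0, dot)))

def select_viewpoints(predetermined, desired, k=2):
    """Return the k predetermined viewpoints nearest to the desired viewpoint."""
    ranked = sorted(predetermined, key=lambda vp: angular_distance(vp[1], desired))
    return [name for name, _ in ranked[:k]]

if __name__ == "__main__":
    # Hypothetical ring of predetermined viewpoints around the object.
    predetermined = [
        ("front", (0.0, 0.0, 1.0)),
        ("right", (1.0, 0.0, 0.0)),
        ("back",  (0.0, 0.0, -1.0)),
        ("left",  (-1.0, 0.0, 0.0)),
    ]
    desired = (0.7071, 0.0, 0.7071)          # client looks from the front-right
    print(select_viewpoints(predetermined, desired))   # ['front', 'right']
```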
  • Patent number: 11544778
    Abstract: The disclosure extends to methods, systems, and computer program products for producing financial goal planning having two dimensional and three dimensional graphical representations for financial goals.
    Type: Grant
    Filed: September 9, 2014
    Date of Patent: January 3, 2023
    Assignee: MX TECHNOLOGIES, INC.
    Inventors: John Ryan Caldwell, Ronald Brennan Knotts, Jonathan R. Hopkins
  • Patent number: 11543551
    Abstract: Disclosed are methods of marine 3D seismic data acquisition that do not require compensation for winds and currents.
    Type: Grant
    Filed: March 11, 2019
    Date of Patent: January 3, 2023
    Assignee: SHEARWATER GEOSERVICES SOFTWARE INC.
    Inventors: Peter M. Eick, Joel D. Brewer, Charles Clinton Mosher
  • Patent number: 11521351
    Abstract: In an example, a method includes acquiring, at a processor, a data model of an object to be generated in additive manufacturing, the data model comprising object model data representing a slice of the object model as a plurality of polygons and object property data comprising property data associated with the plurality of polygons. The slice may be inspected from a predetermined perspective at a plurality of discrete locations. It may be determined if each location is within a face of a polygon, and if so, the object property data associated with that polygon may be identified and associated with that location. The slice may further be inspected at a plurality of discrete locations along an edge of a polygon, the object property data associated with each location may be identified and associated with that location.
    Type: Grant
    Filed: July 10, 2017
    Date of Patent: December 6, 2022
    Assignee: Hewlett-Packard Development Company, L.P.
    Inventors: Josh Shepherd, Matthew A Shepherd
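    The per-location inspection above reduces to a point-in-polygon test followed by a lookup of the property data associated with the containing polygon. The sketch below uses the standard ray-crossing test; the property table keyed by polygon index is an assumption for illustration.
```python
def point_in_polygon(x, y, polygon):
    """Ray-crossing test: is (x, y) inside the polygon given as (x, y) vertices?"""
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        if (y1 > y) != (y2 > y):
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

def properties_for_locations(locations, polygons, properties):
    """Associate each discrete (x, y) location with the object property data of
    the polygon face that contains it, if any."""
    result = {}
    for loc in locations:
        for idx, poly in enumerate(polygons):
            if point_in_polygon(loc[0], loc[1], poly):
                result[loc] = properties[idx]
                break
    return result

if __name__ == "__main__":
    slice_polygons = [[(0, 0), (4, 0), (4, 4), (0, 4)]]       # one square face of the slice
    slice_properties = {0: {"color": "red", "stiffness": 0.8}} # hypothetical property data
    grid = [(x + 0.5, y + 0.5) for x in range(6) for y in range(6)]
    mapped = properties_for_locations(grid, slice_polygons, slice_properties)
    print(len(mapped), "of", len(grid), "locations fall inside the face")
```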