Three-dimension Patents (Class 345/419)
  • Patent number: 11328477
    Abstract: The image processing apparatus includes: a shape generation unit configured to generate data indicating a schematic shape of an object; a shape decomposition unit configured to decompose the data indicating the schematic shape of the object into a plurality of pieces of partial data in accordance with a shape of a cross section of the schematic shape of the object; and a shape fitting unit configured to fit a corresponding basic shape for each piece of the partial data and to generate three-dimensional shape data on the object based on the fitted basic shape.
    Type: Grant
    Filed: May 13, 2020
    Date of Patent: May 10, 2022
    Assignee: CANON KABUSHIKI KAISHA
    Inventor: Tomoyori Iwao
  • Patent number: 11325520
    Abstract: A transmittance estimation unit estimates a transmittance for each of regions from a captured image. The transmittance estimation unit estimates the transmittance for each pixel, using, for example, dark channel processing. A transmittance detection unit detects a transmittance of haze at the time of imaging of the captured image, using the transmittance estimated for each pixel by the transmittance estimation unit and depth information for each pixel. The transmittance detection unit detects the transmittance of the haze on the basis of, for example, a logarithmic average value of the transmittances in the whole or a predetermined portion of the captured image, and an average value of depths indicated by the depth information. (An illustrative code sketch follows this entry.)
    Type: Grant
    Filed: July 12, 2018
    Date of Patent: May 10, 2022
    Assignee: SONY CORPORATION
    Inventors: Satoshi Kawata, Kazunori Kamio, Yuki Tokizaki
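A minimal sketch of the two steps described in this abstract, assuming an RGB image and an estimated atmospheric light supplied as NumPy arrays; the function names, patch size, and omega weight are illustrative assumptions rather than details taken from the patent.

```python
import numpy as np

def estimate_transmittance(rgb, atmospheric_light, patch=15, omega=0.95):
    """Per-pixel transmittance via a dark-channel-style prior (unoptimized sketch)."""
    h, w, _ = rgb.shape
    norm = rgb / atmospheric_light              # normalize by the estimated airlight
    dark = norm.min(axis=2)                     # per-pixel minimum over color channels
    pad = patch // 2
    padded = np.pad(dark, pad, mode="edge")
    dark_channel = np.empty_like(dark)
    for y in range(h):                          # local minimum filter over patches
        for x in range(w):
            dark_channel[y, x] = padded[y:y + patch, x:x + patch].min()
    return 1.0 - omega * dark_channel           # hazier pixels -> lower transmittance

def log_average_transmittance(t, eps=1e-6):
    """Scene-level haze summary as a logarithmic (geometric) average of transmittances."""
    return float(np.exp(np.log(t + eps).mean()))
```

The depth-weighted combination mentioned at the end of the abstract is omitted here for brevity.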
  • Patent number: 11328480
    Abstract: Aspects described herein relate to three-dimensional (3D) characters and rapidly generating 3D characters from a plurality of 3D source models. In generating the 3D characters, one or more 3D source models may be standardized, applied to a base mesh with material ID assignments, and decomposed to isolate particular polygonal mesh pieces and/or texture map feature selections denoted by the material IDs of the base mesh. The disparate isolated polygonal mesh pieces and/or texture map feature selections may be assembled in a modular fashion to compose unique 3D characters unlike any of those of the one or more 3D models. The 3D characters may be further refined through the addition of features and/or accessories, and may also be processed through machine learning algorithms to further ascertain uniqueness.
    Type: Grant
    Filed: November 11, 2020
    Date of Patent: May 10, 2022
    Assignee: Radical Convergence Inc.
    Inventors: Charles Levi Metze, III, Harold Benjamin Helmich, Jacob Nathaniel Fisher, Jeffrey Michael Olson
  • Patent number: 11328489
    Abstract: There is disclosed an augmented reality user interface providing a dual representation of a physical location. The interface includes generating two views for viewing the augmented reality objects: a first view that includes the video data of the view with the augmented reality objects superimposed thereover in augmented reality locations, and a second view that includes data derived from the physical location to generate a map with the augmented reality objects from the first view visible as objects on the map in the augmented reality locations. The location, the motion data, the video data, and the augmented reality objects are combined into an augmented reality video such that when the computing device is in a first position, the first view is visible, and when the computing device is in a second position, the second view is visible, and the augmented reality video is displayed on a display.
    Type: Grant
    Filed: June 30, 2020
    Date of Patent: May 10, 2022
    Assignee: Inspirium Laboratories LLC
    Inventor: Iegor Antypov
  • Patent number: 11328440
    Abstract: Disclosed herein is a method of transmitting point cloud data. The method may include acquiring point cloud data, encoding geometry information including positions of points of the point cloud data, generating one or more LODs based on the geometry information and selecting one or more neighbor points of each point to be attribute-encoded based on the one or more LODs, wherein the selected one or more neighbor points of each point are located within a maximum neighbor point distance, encoding attribute information of each point based on the selected one or more neighbor points of each point, and transmitting the encoded geometry information, the encoded attribute information, and signaling information. (An illustrative code sketch follows this entry.)
    Type: Grant
    Filed: January 25, 2021
    Date of Patent: May 10, 2022
    Assignee: LG Electronics Inc.
    Inventors: Hyejung Hur, Sejin Oh
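Below is a minimal sketch of the distance-bounded neighbor selection and a simple attribute prediction step, assuming points and attributes are NumPy arrays; the function names, the choice of k, and the averaging rule are illustrative assumptions, not the patent's actual predictor.

```python
import numpy as np

def select_neighbors(points, query_idx, k=3, max_neighbor_dist=1.0):
    """Pick up to k nearest points to points[query_idx], dropping any that lie
    farther away than max_neighbor_dist (the maximum neighbor point distance)."""
    dist = np.linalg.norm(points - points[query_idx], axis=1)
    dist[query_idx] = np.inf                    # never select the query point itself
    nearest = np.argsort(dist)[:k]
    return [int(i) for i in nearest if dist[i] <= max_neighbor_dist]

def predict_attribute(attributes, neighbor_idx):
    """Average the neighbors' attributes as a stand-in for attribute prediction."""
    if not neighbor_idx:
        return None                             # no neighbor close enough
    return np.mean(attributes[neighbor_idx], axis=0)
```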
  • Patent number: 11330248
    Abstract: An information processing apparatus includes a setting unit configured to set a movement parameter indicating a relationship between a movement amount in a real space of an input apparatus used for moving a virtual viewpoint corresponding to a virtual viewpoint image, and a movement amount of a virtual viewpoint in a virtual space, based on a user operation, and a movement control unit configured to move, in accordance with a movement of the input apparatus, the virtual viewpoint by a movement amount in the virtual space that is determined based on the movement parameter set by the setting unit and a movement amount in the real space of the input apparatus.
    Type: Grant
    Filed: February 21, 2020
    Date of Patent: May 10, 2022
    Assignee: Canon Kabushiki Kaisha
    Inventor: Rei Ishikawa
  • Patent number: 11328493
    Abstract: An augmented reality screen system includes an augmented reality device and a host. The augmented reality device is configured to capture a physical mark through a camera. The host is configured to receive the physical mark, determine position information and rotation information of the physical mark, and fetch a virtual image from a storage device through a processor of the host. The processor transmits an adjusted virtual image to the augmented reality device according to the position information and the rotation information, and the augmented reality device projects the adjusted virtual image to a display of the augmented reality device. The adjusted virtual image becomes a virtual extended screen, and the virtual extended screen and the physical mark are simultaneously displayed on the display of the augmented reality device.
    Type: Grant
    Filed: February 24, 2021
    Date of Patent: May 10, 2022
    Assignee: ACER INCORPORATED
    Inventors: Huei-Ping Tzeng, Chao-Kuang Yang, Wen-Cheng Hsu, Chih-Wen Huang, Chih-Haw Tan
  • Patent number: 11330246
    Abstract: An imaging system is configured to use an array of time-of-flight (ToF) pixels to determine depth information using the ToF imaging method and/or the stereo imaging method. A light emitting component emits light to illuminate a scene and a light detecting component detects reflected light via the array of ToF pixels. A ToF pixel is configured to determine phase shift data based on a phase shift between the emitted light and the reflected light, as well as intensity data based on an amplitude of the reflected light. Multiple ToF pixels share a single micro-lens. This enables multiple offset images to be generated using the intensity data measured by each ToF pixel. Accordingly, via a configuration in which multiple ToF pixels share a single micro-lens, depth information can be determined using both the ToF imaging method and the stereo imaging method. (An illustrative code sketch follows this entry.)
    Type: Grant
    Filed: November 21, 2019
    Date of Patent: May 10, 2022
    Assignee: MICROSOFT TECHNOLOGY LICENSING, LLC
    Inventor: Minseok Oh
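A short sketch of the two depth paths the abstract combines: time-of-flight depth from the measured phase shift, and stereo depth from the offset intensity images produced by pixels sharing one micro-lens. The function names and parameters are illustrative assumptions.

```python
import numpy as np

C = 299_792_458.0  # speed of light in m/s

def depth_from_phase(phase_rad, mod_freq_hz):
    """ToF depth: d = c * phi / (4 * pi * f_mod), valid within one unambiguous range."""
    return C * phase_rad / (4.0 * np.pi * mod_freq_hz)

def depth_from_disparity(disparity_px, focal_px, baseline_m):
    """Stereo depth from the disparity between the offset intensity images."""
    return focal_px * baseline_m / np.maximum(disparity_px, 1e-6)
```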
  • Patent number: 11321929
    Abstract: A method and system for enabling a self-localizing mobile device to localize other self-localizing mobile devices having different reference frames is disclosed. Multiple self-localizing mobile devices are configured to survey an environment to generate a three-dimensional map of the environment using simultaneous localization and mapping (SLAM) techniques. The mobile devices are equipped with wireless transceivers, such as Ultra-wideband radios, for measuring distances between the mobile devices using wireless ranging techniques. Based on the measured distances and self-localized positions in the environment corresponding to each measured distance, at least one of the mobile devices is configured to determine relative rotational and translational transformations between the different reference frames of the mobile devices. (An illustrative code sketch follows this entry.)
    Type: Grant
    Filed: February 27, 2019
    Date of Patent: May 3, 2022
    Assignee: Purdue Research Foundation
    Inventors: Ke Huo, Karthik Ramani
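The sketch below shows one way a relative transform between two devices' reference frames could be recovered from self-localized positions and measured ranges, reduced to the planar (2D) case for brevity; the parameterization and the use of SciPy's least_squares are assumptions of this illustration, not the patent's method.

```python
import numpy as np
from scipy.optimize import least_squares

def estimate_relative_transform(p_a, p_b, ranges):
    """Fit a planar rotation and translation mapping device B's frame into device A's,
    so that B's transformed positions sit at the measured UWB range from A's positions.
    p_a, p_b: (N, 2) self-localized positions in each device's own frame; ranges: (N,)."""
    def residuals(x):
        theta, tx, ty = x
        rot = np.array([[np.cos(theta), -np.sin(theta)],
                        [np.sin(theta),  np.cos(theta)]])
        p_b_in_a = p_b @ rot.T + np.array([tx, ty])
        return np.linalg.norm(p_b_in_a - p_a, axis=1) - ranges
    # Sufficient relative motion between the devices is needed for a unique solution.
    return least_squares(residuals, x0=np.zeros(3)).x  # [theta, tx, ty]
```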
  • Patent number: 11321930
    Abstract: Disclosed herein is a terminal device connected to a server device in a communicable manner. The terminal device includes a position identification section that identifies a position in an actual space; a setting section that sets, in the actual space, a virtual space associated with content-associated information for predetermined content, the setting section setting the virtual space in an occupying space while defining a predetermined range in the actual space as the occupying space; and a performance section that receives the content-associated information in the virtual space from the server device and performs predetermined information processing.
    Type: Grant
    Filed: October 28, 2020
    Date of Patent: May 3, 2022
    Assignee: SONY INTERACTIVE ENTERTAINMENT INC.
    Inventor: Yoshinori Ohashi
  • Patent number: 11321931
    Abstract: A technology is described that streams graphical components and rendering instructions to a client device, for the client device to perform the final rendering and overlaying of that content onto the client's video stream based on the client's most recent tracking of the device's position and orientation. A client device sends a request for augmented reality drawing data to a network device. In response, the network device generates augmented reality drawing data, which can be augmented reality change data based on the augmented reality information and previous client render state information, and sends the augmented reality drawing data to the client device. The client device receives the augmented reality drawing data and renders a visible representation of an augmented reality scene, overlaying augmented reality graphics over a current video scene obtained from a camera of the client device.
    Type: Grant
    Filed: March 4, 2021
    Date of Patent: May 3, 2022
    Assignee: HOME BOX OFFICE, INC.
    Inventor: Richard Parr
  • Patent number: 11321932
    Abstract: A method for aligning manipulations in time and space to a first model of a three-dimensional (3D) real-world object in a second model of a 3D real-world environment, said method including: generating, by first terminal device, third model based on first model and second model, from first point of view; transmitting third model and timing metadata to second terminal device(s); receiving third model and timing metadata at second terminal device(s); manipulating third model by second terminal device(s); creating manipulation information; transmitting manipulation information from second terminal device(s) to first terminal device; receiving manipulation information at first terminal device; updating, by first terminal device, first model and second model from second point of view; and aligning, by first terminal device, manipulation information in time and space.
    Type: Grant
    Filed: May 21, 2021
    Date of Patent: May 3, 2022
    Assignee: Delta Cygni Labs Oy
    Inventors: Sauli Kiviranta, Boris Krassi, Igor Levochkin, Teemu Kumpumäki
  • Patent number: 11321899
    Abstract: Disclosed herein are methods, computer apparatus, and computer programs for creating two-dimensional (2D) image sequences with the appearance of three-dimensional (3D) rotation and depth.
    Type: Grant
    Filed: May 28, 2021
    Date of Patent: May 3, 2022
    Inventor: Alexander Dutch
  • Patent number: 11315309
    Abstract: An apparatus includes: an object data storage section that stores polygon identification data for polygons of an object to be displayed; a reference image data storage section that stores data of reference images each representing an image when a space including the object to be displayed is viewed from one of a plurality of prescribed reference viewing points, and further stores polygon identification data corresponding to each reference image; a viewing point information acquisition section that acquires information relating to a viewing point; a projection section that represents on a plane of a display image the position and shape of an image of the object when the space is viewed from the viewing point; a pixel value determination section that determines the values of pixels constituting the image of the object in the display image, using the values of the pixels representing the same image in one or more of the plurality of reference images; and an output section that outputs the data of the display image.
    Type: Grant
    Filed: December 14, 2018
    Date of Patent: April 26, 2022
    Assignee: SONY INTERACTIVE ENTERTAINMENT INC.
    Inventors: Jason Gordon Doig, Andrew James Bigos
  • Patent number: 11314117
    Abstract: A display assembly, a display device, and a control method thereof are disclosed. The display assembly includes: a polymer dispersed liquid crystal layer; a first electrode layer and a second electrode layer for providing an electric field for the polymer dispersed liquid crystal layer; and a birefringent lens grating that is closer to a display side of the display assembly than the polymer dispersed liquid crystal layer. The birefringent lens grating is configured to transmit collimated light of a first polarization direction emitted from the polymer dispersed liquid crystal layer along an original optical path of the collimated light, and to refract collimated light of a second polarization direction emitted from the polymer dispersed liquid crystal layer to left and right eyes of a user, respectively. The first polarization direction is perpendicular to the second polarization direction.
    Type: Grant
    Filed: April 15, 2019
    Date of Patent: April 26, 2022
    Assignee: BOE TECHNOLOGY GROUP CO., LTD.
    Inventor: Wei Wei
  • Patent number: 11314905
    Abstract: A system and method for generating computerized floor plans is provided. The system comprises a mobile computing device, such as a smart cellular telephone or a tablet computer, having an internal digital gyroscope and camera, and an interior modeling software engine that interacts with the gyroscope and camera to allow a user to quickly and conveniently take measurements of interior building features, and to create computerized floor plans of such features from any location within a space, without requiring the user to stay in a single location while taking the measurements. The system presents the user with a graphical user interface that allows the user to quickly and conveniently delineate wall corner features using a reticle displayed within the user interface. As corners are identified, the system processes the corner information and information from the gyroscope to calculate wall features and creates a floor plan of the space with high accuracy.
    Type: Grant
    Filed: February 11, 2015
    Date of Patent: April 26, 2022
    Assignee: Xactware Solutions, Inc.
    Inventors: Bradley McKay Childs, Jeffrey C. Taylor, Jeffery D. Lewis, Corey Reed
  • Patent number: 11314383
    Abstract: According to one embodiment, a method includes obtaining a media composition for display on an electronic display device. The media composition includes a plurality of layers, with each layer including a visual element. The method also includes selecting at least some of the layers of the media composition to have a parallax effect applied thereto and determining an amount of total parallax effect to apply to the selected layers. Also, the method includes determining an appropriate amount of offset to apply to each of the selected layers on an individual basis and shifting the selected layers in one or more directions by their respective appropriate amounts. Moreover, the method includes displaying the media composition showing the parallax effect on the electronic display device. (An illustrative code sketch follows this entry.)
    Type: Grant
    Filed: February 24, 2020
    Date of Patent: April 26, 2022
    Assignee: Apple Inc.
    Inventors: Jose J. Aroche Martinez, Zachary P. Petersen, Dale A. Taylor, Leticia M. Alarcon, Dudley G. Wong, Sean M. Harold, Ada Turner, Oscar H. Walden, James C. Wilson
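A minimal sketch of assigning each selected layer an individual offset that grows toward the chosen total parallax amount and then shifting the layer by it; the linear weighting and the function names are illustrative assumptions, not the patent's actual weighting.

```python
def layer_offsets(num_layers, total_parallax_px):
    """Spread offsets across layers so later layers shift more, up to the total amount.
    The linear ramp here is only an illustrative weighting."""
    if num_layers < 2:
        return [0.0] * num_layers
    return [total_parallax_px * i / (num_layers - 1) for i in range(num_layers)]

def shift_layer(position_xy, offset_px, direction_xy=(1.0, 0.0)):
    """Shift a layer's 2D position along a unit direction by its per-layer offset."""
    return (position_xy[0] + offset_px * direction_xy[0],
            position_xy[1] + offset_px * direction_xy[1])
```

For example, layer_offsets(4, 12.0) yields [0.0, 4.0, 8.0, 12.0].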
  • Patent number: 11315310
    Abstract: A global illumination data structure (e.g., a data structure created to store global illumination information for geometry within a scene to be rendered) is computed for the scene. Additionally, reservoir-based spatiotemporal importance resampling (RESTIR) is used to perform illumination gathering, utilizing the global illumination data structure. The illumination gathering includes identifying light values for points within the scene, where one or more points are selected within the scene based on the light values in order to perform ray tracing during the rendering of the scene. (An illustrative code sketch follows this entry.)
    Type: Grant
    Filed: January 19, 2021
    Date of Patent: April 26, 2022
    Assignee: NVIDIA CORPORATION
    Inventors: Christopher Ryan Wyman, Morgan McGuire, Peter Schuyler Shirley, Aaron Eliot Lefohn
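The core of reservoir-based resampled importance sampling is a streaming, weighted "keep one sample" update. The sketch below shows that update in isolation, assuming a caller-supplied target_pdf function; it omits the spatial and temporal reservoir reuse that gives RESTIR-style methods their efficiency.

```python
import random

class Reservoir:
    """Single-sample weighted reservoir used in resampled importance sampling."""
    def __init__(self):
        self.sample = None
        self.w_sum = 0.0
        self.m = 0

    def update(self, candidate, weight):
        self.w_sum += weight
        self.m += 1
        if weight > 0.0 and random.random() < weight / self.w_sum:
            self.sample = candidate

def gather_lighting(candidates, target_pdf):
    """Stream candidate light samples through one reservoir, keeping a single sample
    with probability proportional to its target weight."""
    res = Reservoir()
    for c in candidates:
        res.update(c, target_pdf(c))
    if res.sample is None or target_pdf(res.sample) <= 0.0:
        return None, 0.0
    # Unbiased contribution weight for the surviving sample.
    return res.sample, res.w_sum / (res.m * target_pdf(res.sample))
```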
  • Patent number: 11315334
    Abstract: A display apparatus including light source(s), camera(s), head-tracking means, and processor configured to: obtain three-dimensional model of real-world environment; control camera(s) to capture given image of real-world environment, whilst processing head-tracking data obtained from head-tracking means to determine pose of user's head with respect to which given image is captured; determine region of three-dimensional model that corresponds to said pose of user's head; compare plurality of features extracted from region of three-dimensional model with plurality of features extracted from given image, to detect object(s) present in real-world environment; employ environment map of extended-reality environment to generate intermediate extended-reality image based on pose of user's head; embed object(s) in intermediate extended-reality image to generate extended-reality image; and display extended-reality image via light source(s).
    Type: Grant
    Filed: February 9, 2021
    Date of Patent: April 26, 2022
    Assignee: Varjo Technologies Oy
    Inventors: Ari Antti Erik Peuhkurinen, Ville Timonen, Niki Dobrev
  • Patent number: 11317235
    Abstract: A method provides binaural sound to a listener while the listener watches a movie so that sounds from the movie localize to a location of a character in the movie. Sound is convolved with head related transfer functions (HRTFs) of the listener, and the convolved sound is provided to the listener who wears a wearable electronic device. (An illustrative code sketch follows this entry.)
    Type: Grant
    Filed: March 10, 2019
    Date of Patent: April 26, 2022
    Inventors: Philip Scott Lyren, Glen A. Norris
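A minimal sketch of the convolution step, assuming the listener's left and right head-related impulse responses (the time-domain counterparts of HRTFs) are available as NumPy arrays for the character's current location; the names are illustrative.

```python
import numpy as np

def binauralize(mono, hrir_left, hrir_right):
    """Convolve a mono movie sound with the listener's left/right HRIRs so the
    sound localizes to the position those responses were measured (or selected) for."""
    left = np.convolve(mono, hrir_left)
    right = np.convolve(mono, hrir_right)
    out = np.zeros((max(len(left), len(right)), 2))  # stereo output for headphones
    out[:len(left), 0] = left
    out[:len(right), 1] = right
    return out
```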
  • Patent number: 11317081
    Abstract: Gaze is corrected by adjusting multi-view images of a head. Image patches containing the left and right eyes of the head are identified and a feature vector is derived from plural local image descriptors of the image patch in at least one image of the multi-view images. A displacement vector field representing a transformation of an image patch is derived, using the derived feature vector to look up reference data comprising reference displacement vector fields associated with possible values of the feature vector produced by machine learning. The multi-view images are adjusted by transforming the image patches containing the left and right eyes of the head in accordance with the derived displacement vector field. (An illustrative code sketch follows this entry.)
    Type: Grant
    Filed: July 23, 2020
    Date of Patent: April 26, 2022
    Assignee: RealD Spark, LLC
    Inventors: Eric Sommerlade, Michael G. Robinson
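The sketch below applies a displacement vector field to an eye patch, which is the final adjustment step the abstract describes. It uses nearest-neighbor sampling and assumes the field has already been looked up from the reference data, so it illustrates only the warping, not the machine-learned lookup.

```python
import numpy as np

def warp_patch(patch, flow):
    """Warp a grayscale eye patch by a per-pixel displacement field.
    patch: (H, W) array; flow: (H, W, 2) array of (dy, dx) displacements that map
    each destination pixel back to the source pixel it should sample from."""
    h, w = patch.shape
    ys, xs = np.mgrid[0:h, 0:w]
    src_y = np.clip(np.round(ys + flow[..., 0]).astype(int), 0, h - 1)
    src_x = np.clip(np.round(xs + flow[..., 1]).astype(int), 0, w - 1)
    return patch[src_y, src_x]
```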
  • Patent number: 11315329
    Abstract: In one embodiment, a method includes accessing a plurality of points, wherein each point (1) corresponds to a spatial location associated with an observed feature of a physical environment and (2) is associated with a patch representing the observed feature, determining a density associated with each of the plurality of points based on the spatial locations of the plurality of points, scaling the patch associated with each of the plurality of points based on the density associated with the point, and reconstructing a scene of the physical environment based on at least the scaled patches. (An illustrative code sketch follows this entry.)
    Type: Grant
    Filed: February 25, 2020
    Date of Patent: April 26, 2022
    Assignee: Facebook Technologies, LLC.
    Inventors: Alexander Sorkine Hornung, Alessia Marra, Fabian Langguth, Matthew James Alderman
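A minimal sketch of estimating a per-point density from the spatial locations and scaling each point's patch accordingly, assuming a small point set held in a NumPy array; the k-th-nearest-neighbor density estimate and the cube-root scaling rule are illustrative assumptions.

```python
import numpy as np

def knn_density(points, k=8):
    """Local density from the distance to each point's k-th nearest neighbor
    (brute force, fine for small point sets)."""
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=2)
    np.fill_diagonal(d, np.inf)
    kth = np.sort(d, axis=1)[:, k - 1]
    return 1.0 / np.maximum(kth, 1e-9)

def scale_patches(base_radius, density):
    """Grow patches where points are sparse so the reconstructed surface stays closed."""
    return base_radius / np.cbrt(density)
```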
  • Patent number: 11315277
    Abstract: A device and a method of using the device to determine a user-specific head-related transfer function (HRTF), are described. The device can determine first geometric data corresponding to visible features of a pinna of a user in an image, and second geometric data corresponding to hidden features of the pinna obfuscated by the visible features in the image. The first geometric data and the second geometric data are combined in a geometric model that describes a shape of the pinna, and the user-specific HRTF is determined based on the geometric model. The user-specific HRTF is used to render spatial audio to the user. Other aspects are also described and claimed.
    Type: Grant
    Filed: September 4, 2019
    Date of Patent: April 26, 2022
    Assignee: APPLE INC.
    Inventors: Peter Victor Jupin, Yacine Azmi, Martin E. Johnson, Darius A. Satongar, Jonathan D. Sheaffer
  • Patent number: 11308638
    Abstract: A depth estimation method includes: taking a single image as a first image in binocular images, and obtaining a second image in the binocular images based on the first image via a first neural network; and obtaining depth information corresponding to the first image via a second neural network by performing binocular stereo matching on the first image and the second image.
    Type: Grant
    Filed: March 31, 2020
    Date of Patent: April 19, 2022
    Assignee: SHENZHEN SENSETIME TECHNOLOGY CO., LTD.
    Inventors: Yue Luo, Sijie Ren
  • Patent number: 11305463
    Abstract: A process of 3D scanning lactating women's breasts to generate an AutoCAD model of the maternal nipple is disclosed. The 3D scanning and generation of a plurality of maternal nipple shapes for creation of breastfeeding accessories and molds is intended to closely mimic a specific mother's unique nipple shape, which can vary widely from one woman to another. The embodiments eliminate nipple confusion in infants being introduced to a bottle nipple and pacifier, in order to promote prolonged breastfeeding. Mimicking a mother's unique nipple shape helps create accessories that better fit a mother's unique nipple size and shape and decrease pain (i.e. pump flange and nipple shield).
    Type: Grant
    Filed: December 5, 2018
    Date of Patent: April 19, 2022
    Assignee: The Natural Nipple Corp.
    Inventor: Lauren Wright
  • Patent number: 11308706
    Abstract: Systems and methods for local augmented reality (AR) tracking of an AR object are disclosed. In one example embodiment a device captures a series of video image frames. A user input is received at the device associating a first portion of a first image of the video image frames with an AR sticker object and a target. A first target template is generated to track the target across frames of the video image frames. In some embodiments, global tracking based on a determination that the target is outside a boundary area is used. The global tracking comprises using a global tracking template for tracking movement in the video image frames captured following the determination that the target is outside the boundary area. When the global tracking determines that the target is within the boundary area, local tracking is resumed along with presentation of the AR sticker object on an output display of the device.
    Type: Grant
    Filed: July 13, 2020
    Date of Patent: April 19, 2022
    Assignee: Snap Inc.
    Inventors: Jia Li, Linjie Luo, Rahul Bhupendra Sheth, Ning Xu, Jianchao Yang
  • Patent number: 11308693
    Abstract: A method of edge loop selection includes accessing a polygon mesh; receiving a selection of a first edge connected to a first non-four-way intersection vertex; receiving, after receiving the selection of the first edge, a selection of a second edge connected to the first non-four-way intersection vertex; in response to receiving a command invoking an edge loop selection process: evaluating a topological relationship between the first edge and the second edge; determining a rule for processing a non-four-way intersection vertex based on the topological relationship between the first edge and the second edge; and completing an edge loop by, from the second edge, processing each respective four-way intersection vertex by choosing a middle edge as a next edge at the respective four-way intersection vertex, and processing each respective non-four-way intersection vertex based on the rule.
    Type: Grant
    Filed: July 16, 2020
    Date of Patent: April 19, 2022
    Assignee: LUCASFILM ENTERTAINMENT COMPANY LTD.
    Inventor: Colette Mullenhoff
  • Patent number: 11308692
    Abstract: Embodiments of the present disclosure relate to a method and device for processing an image, and a nonvolatile storage medium. The method includes: acquiring a first image and a first virtual object having a corresponding relationship with a preset standard model, where the first image includes a target object; determining control information based on the target object and the standard model; obtaining a second virtual object matched with the target object by processing the first virtual object based on the control information; and generating a second image, where the second image includes the target object fitted with the second virtual object.
    Type: Grant
    Filed: June 15, 2020
    Date of Patent: April 19, 2022
    Assignee: Beijing Dajia Internet Information Technology Co., Ltd.
    Inventor: Liqian Ma
  • Patent number: 11308673
    Abstract: Systems and methods for using three-dimensional scans of a physical subject to determine positions and/or orientations of skeletal joints in the rigging for a virtual character. At least one articulation segment of a polygon mesh for the virtual character may be determined. The articulation segment may include a subset of vertices in the polygon mesh. An indicator of the position or orientation of the articulation segment of the polygon mesh may be determined. Based on the indicator of the position or orientation of the articulation segment, the position or orientation of at least one joint for deforming the polygon mesh may be determined.
    Type: Grant
    Filed: May 1, 2019
    Date of Patent: April 19, 2022
    Assignee: Magic Leap, Inc.
    Inventor: Sean Michael Comer
  • Patent number: 11308675
    Abstract: Techniques related to capturing 3D faces using image and temporal tracking neural networks and modifying output video using the captured 3D faces are discussed. Such techniques include applying a first neural network to an input vector corresponding to a first video image having a representation of a human face to generate a morphable model parameter vector, applying a second neural network to an input vector corresponding to the first video image and a second, temporally subsequent video image to generate a morphable model parameter delta vector, generating a 3D face model of the human face using the morphable model parameter vector and the morphable model parameter delta vector, and generating output video using the 3D face model.
    Type: Grant
    Filed: June 14, 2018
    Date of Patent: April 19, 2022
    Assignee: Intel Corporation
    Inventors: Shandong Wang, Ming Lu, Anbang Yao, Yurong Chen
  • Patent number: 11304761
    Abstract: This invention is a system and method for utilizing artificial intelligence to operate a surgical robot (e.g., to perform a laminectomy), including a surgical robot, an artificial intelligence guidance system, an image recognition system, an image recognition database, and a database of past procedures with sensor data, electronic medical records, and imaging data. The image recognition system may identify the tissue type present in the patient; if it is the desired tissue type, the AI guidance system may remove a layer of that tissue with the end effector on the surgical robot, and if the image recognition system identifies the tissue as anything other than the desired tissue type, it may have the surgeon define the tissue type.
    Type: Grant
    Filed: November 13, 2020
    Date of Patent: April 19, 2022
    Assignee: Intuitive Surgical Operations, Inc.
    Inventors: Jeffrey Roh, Justin Esterberg
  • Patent number: 11310483
    Abstract: An HMD includes a six-axis sensor, a magnetic sensor, and a head motion detecting section that detect at least one of the position and motion of a head, a reference setting section that sets a reference state based on at least one of the head position and motion detected by the head motion detecting section, and a display controlling section that changes the display state of a content displayed by the display section based on changes in the position and motion of the head with respect to those in the reference state. The content is formed of a plurality of contents, and the display controlling section changes the display state of the contents displayed by an image display section in such a way that the relative display positions of the contents are maintained.
    Type: Grant
    Filed: August 12, 2020
    Date of Patent: April 19, 2022
    Assignee: SEIKO EPSON CORPORATION
    Inventor: Shinichi Kobayashi
  • Patent number: 11308694
    Abstract: There is provided an image processing apparatus which includes a voice recognition section that recognizes a voice uttered by a user, a motion recognition section that recognizes a motion of the user, a text object control section that disposes an object of text representative of the contents of the voice in a three-dimensional virtual space, and varies text by implementing interaction based on the motion, and an image generation section that displays an image with the three-dimensional virtual space projected thereon.
    Type: Grant
    Filed: June 15, 2020
    Date of Patent: April 19, 2022
    Assignee: SONY INTERACTIVE ENTERTAINMENT INC.
    Inventor: Masashi Nakata
  • Patent number: 11308627
    Abstract: A device, method, and non-transitory computer readable medium for interactive 3D visualization of ultrasound images of a supraspinatus tendon injury. Ultrasound images are acquired of a region in which the supraspinatus tendon injury is suspected. The ultrasound images are preprocessed, and energy of the preprocessed ultrasound images is minimized. A set of supraspinatus tendon images are extracted from low energy preprocessed ultrasound images. A morphological operation is performed on the set of supraspinatus tendon images to generate a smoothed set of supraspinatus tendon images. A binary mask is applied to the smoothed set of supraspinatus tendon images to detect boundary points of the supraspinatus tendon and generate a set of segmented image frames. The set of segmented image frames are arranged based on spatial position of each segmented image frame with respect to the supraspinatus tendon. A 3D representation of the supraspinatus tendon is reconstructed and rendered on a display.
    Type: Grant
    Filed: September 17, 2021
    Date of Patent: April 19, 2022
    Assignee: King Abdulaziz University
    Inventors: Mohammed U. Alsaggaf, Irraivan Elamvazuthi, Ubaid M. Al-Saggaf, Muhammad Moinuddin
  • Patent number: 11304661
    Abstract: In an exemplary embodiment, a tomography device comprises a scanner that obtains image slices. The device additionally comprises at least one processor configured to: perform a Hermetic Transform on the image slices to obtain hermetically transformed data; filter and perform an Inverse Hermetic Transform on the Hermetic Transform data to obtain filtered inverse Hermetic Transform data; and perform back projection and angle integration on the filtered inverse Hermetic Transform data.
    Type: Grant
    Filed: October 23, 2015
    Date of Patent: April 19, 2022
    Assignee: VERTOCOMM, INC.
    Inventors: Harvey C. Woodsum, Christopher M. Woodsum
  • Patent number: 11310459
    Abstract: An image capturing device, image capturing system, and image processing method, each of which: obtains a video image of an object; converts a wide-angle video image to generate a low-definition, wide-angle image; applies projection transformation to a part of the wide-angle video image to generate a high-definition, narrow-angle video image in different projection; combines each frame of the low-definition, wide-angle video image and a corresponding frame of the high-definition, narrow-angle video image, into one frame data while reducing a resolution of each video image, to generate a combined video image; transmits the combined video image for display at a communication terminal; in response to a request from the communication terminal, applies projection transformation to a part of a frame of the wide-angle video image to generate an ultra-high-definition, narrow-angle still image in different projection; and transmits the ultra-high-definition, narrow-angle still image for display at the communication terminal.
    Type: Grant
    Filed: December 14, 2020
    Date of Patent: April 19, 2022
    Assignee: RICOH COMPANY, LTD.
    Inventors: Takamichi Katoh, Yoshinaga Kato
  • Patent number: 11307412
    Abstract: System, method, and non-transitory computer readable medium for presenting audio-based visual overlays on see-through optical assemblies. Overlays are presented by capturing, via a camera of an eyewear device, initial images of a scene, receiving an audio signal, modifying the initial images responsive to the audio signal to create overlay images, and displaying, via a see-through optical assembly of the eyewear device, the overlay images to a wearer of the eyewear device over the scene in a viewing area of the eyewear device.
    Type: Grant
    Filed: December 30, 2019
    Date of Patent: April 19, 2022
    Assignee: Snap Inc.
    Inventor: David Meisenholder
  • Patent number: 11302073
    Abstract: Method for texturing a 3D model of at least one scene (5), comprising: a) the meshing with surface elements (50; 55) of a point cloud (45) representing the scene, so as to generate the 3D model, each surface element representing an area of the scene, b) the unfolding of the 3D model for obtaining a 2D model formed of a plane mesh (60a; 60b) formed of polygons (65), each surface element corresponding to a single polygon, and vice versa, and c) for at least one, preferably all the surface elements, iv) the identification, from an image bank (40a; 40b), of the images representing the area of the scene and which have been acquired by a camera the image plane (72a-b) of which has a normal direction, in the corresponding acquisition position, forming an angle (θa-b) less than 10°, preferably less than 5°, better less than 3° with a direction normal (70) to the face of the surface element, v) the selection of an image (40a-b) from the identified images, and, vi) the association of a texture property with a corresp
    Type: Grant
    Filed: October 3, 2019
    Date of Patent: April 12, 2022
    Assignee: SOLETANCHE FREYSSINET
    Inventors: Guy Perazio, Jose Peral, Serge Valcke, Luc Chambaud
  • Patent number: 11302063
    Abstract: A 3D conversation system can facilitate 3D conversations in an augmented reality environment, allowing conversation participants to appear as if they are face-to-face. The 3D conversation system can accomplish this with a pipeline of data processing stages, which can include calibrate, capture, tag and filter, compress, decompress, reconstruct, render, and display stages. Generally, the pipeline can capture images of the sending user, create intermediate representations, transform the representations to convert from the orientation the images were taken from to a viewpoint of the receiving user, and output images of the sending user, from the viewpoint of the receiving user, in synchronization with audio captured from the sending user. Such a 3D conversation can take place between two or more sender/receiving systems and, in some implementations can be mediated by one or more server systems. In various configurations, stages of the pipeline can be customized based on a conversation context.
    Type: Grant
    Filed: July 21, 2020
    Date of Patent: April 12, 2022
    Assignee: Facebook Technologies, LLC
    Inventors: Brian Keith Cabral, Albert Parra Pozo
  • Patent number: 11302070
    Abstract: Disclosed is a system for efficiently accessing a point cloud via a multi-tree deconstruction of the point cloud. The system may receive the point cloud, may differentiate different sets of data points from the point cloud using differentiation criteria, and may generate different trees with each tree having leaf nodes corresponding to one of the differentiated sets of data points and parent nodes defined according to commonality in values of two or more leaf nodes. The system may receive a request to render the 3D environment, load a first tree into memory, generate a first image from the first tree data points, flush the first tree from the memory, load a second tree into the memory, generate a second image from the second tree data points, and present a composite image by combining at least the first image with the second image.
    Type: Grant
    Filed: December 15, 2021
    Date of Patent: April 12, 2022
    Assignee: Illuscio, Inc.
    Inventors: Joseph Bogacz, Robert Monaghan
  • Patent number: 11298619
    Abstract: An information processing system for controlling movements of a character (200) in a virtual three-dimensional space, comprising a movement control unit (231) that controls the movements of the character (200), and a switching determination unit (232) that determines switching of the movement of the character (200) by the movement control unit (231) between a three-dimensional movement in the virtual three-dimensional space and a movement in a predetermined surface (211) provided in the virtual three-dimensional space. The movement control unit (231) determines a speed of the character (200) after the switching on the basis of a speed of the character (200) before the switching when the switching is performed between the three-dimensional movement in the virtual three-dimensional space and the movement in the predetermined surface (211).
    Type: Grant
    Filed: September 28, 2020
    Date of Patent: April 12, 2022
    Assignee: NINTENDO CO., LTD.
    Inventors: Wataru Tanaka, Kenta Motokura
  • Patent number: 11298050
    Abstract: A posture estimation device includes: a processor that: obtains detected information indicating a feature of a subject, wherein the feature is detected based on an image that is captured by an imager from a position where the subject is viewed from above, calculates a feature amount based on the obtained detected information, updates, based on a geometric relationship between the imager and the subject, a model parameter for estimating a posture of the subject by machine learning using the calculated feature amount in a time series, and estimates the posture of the subject using the updated model parameter.
    Type: Grant
    Filed: January 23, 2020
    Date of Patent: April 12, 2022
    Assignee: Konica Minolta, Inc.
    Inventor: Naoki Ikeda
  • Patent number: 11303807
    Abstract: A method for generating a surround view (SV) image for display in a view port of an SV processing system is provided that includes capturing, by at least one processor, corresponding images of video streams from each camera of a plurality of cameras and generating, by the at least one processor, the SV image using ray tracing for lens remapping to identify coordinates of pixels in the images corresponding to pixels in the SV image.
    Type: Grant
    Filed: June 29, 2021
    Date of Patent: April 12, 2022
    Assignee: Texas Instruments Incorporated
    Inventors: Hemant Vijay Kumar Hariyani, Martin Fritz Mueller, Vikram Vijayanbabu Appia
  • Patent number: 11298617
    Abstract: A non-transitory computer-readable recording medium having computer-readable instructions stored thereon, which, when executed, cause a computer to execute a process of a game program, is provided. The process includes a step (a) of setting a default position of a virtual camera; a step (b) of calculating an outer edge of multiple objects arranged in a given region in the virtual space; a step (c) of determining a target position in an interior of the outer edge, to which the virtual camera is directed; a step (d) of adjusting a height of the virtual camera at the default position so as to display all of the plurality of objects in the region when the virtual camera is directed from the default position to the target position; and a step (e) of operating the virtual camera to be directed to the target position from the adjusted height of the default position. (An illustrative code sketch follows this entry.)
    Type: Grant
    Filed: November 13, 2020
    Date of Patent: April 12, 2022
    Assignee: KOEI TECMO GAMES CO., LTD.
    Inventor: Masayuki Kuriyama
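A simplified stand-in for steps (b) through (d): compute the objects' outer edge, aim at a target inside it, and raise the camera until the whole region fits in the vertical field of view. The overhead-camera geometry, field-of-view value, and margin are assumptions of this sketch, not the game's actual camera rig.

```python
import math

def frame_objects(object_positions, fov_vertical_deg=60.0, margin=1.2):
    """Return a look-at target inside the objects' outer edge and a camera height
    that keeps the whole region visible when looking straight down at the target."""
    xs = [p[0] for p in object_positions]
    zs = [p[2] for p in object_positions]
    target = (sum(xs) / len(xs), 0.0, sum(zs) / len(zs))     # centroid on the ground
    extent = max(max(xs) - min(xs), max(zs) - min(zs)) * margin
    height = (extent / 2.0) / math.tan(math.radians(fov_vertical_deg) / 2.0)
    return target, height
```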
  • Patent number: 11302069
    Abstract: In one embodiment, a method is for rendering medical volumetric images from received volumetric data, using a cinematic rendering approach, based on a Monte Carlo path tracing algorithm (MCPT). The MCPT algorithm uses at least one microfacet-based bidirectional reflectance distribution function (BRDF) for computing a probability of how light is reflected at an implicit surface, which is used for shading the implicit surface. In one embodiment, the method includes detecting if a surface scatter event is triggered. If yes, the method includes modifying the computation of a local gradient in the BRDF by perturbing the respective received volumetric data by applying a noise function for simulating a roughness of the implicit surface; and shading the implicit surface for rendering the received volumetric data.
    Type: Grant
    Filed: September 10, 2020
    Date of Patent: April 12, 2022
    Assignee: SIEMENS HEALTHCARE GMBH
    Inventor: Klaus Engel
  • Patent number: 11303923
    Abstract: Embodiments are generally directed to affine motion compensation for current picture referencing. An embodiment of an apparatus includes one or more processors for processing of data; a memory for storage of data including video data; and an encoder for encoding of video data to generate encoded video data, wherein the encoder includes a component to provide affine motion compensation for current picture references in the video data.
    Type: Grant
    Filed: June 11, 2019
    Date of Patent: April 12, 2022
    Assignee: INTEL CORPORATION
    Inventors: Jill Boyce, Zhipin Deng, Lidong Xu
  • Patent number: 11302052
    Abstract: An aspect provides a computer-implemented method for constructing evaluation logic associated with an animation software package. The method comprises receiving at least one software module, the at least one software module including at least one evaluator; writing the at least one software module to at least one executable code object; and maintaining data for the at least one software module in a contiguous block of memory for use by the software module.
    Type: Grant
    Filed: July 19, 2021
    Date of Patent: April 12, 2022
    Assignee: WETA DIGITAL LIMITED
    Inventors: Niall J. Lenihan, Richard Chi Lei, Sander van der Steen
  • Patent number: 11302034
    Abstract: A property inspection service hosted on a web-based server system. The server system receives a list of physical addresses each corresponding to different parcels. For each address, the server system obtains multiple images including overhead images and perspective view images. A first trained model analyzes selected overhead images individually to identify one or more building structures. A second trained model analyzes selected perspective view images individually to identify a primary building structure. A third trained model analyzes the selected overhead images and the perspective view images together in an integrated approach to identify attributes associated with the identified building structure. A digital report is generated as a graphical user interface configured to display the selected images and the attributes associated with the identified building structure.
    Type: Grant
    Filed: July 9, 2020
    Date of Patent: April 12, 2022
    Assignee: Tensorflight, Inc.
    Inventors: Robert Kozikowski, Zbigniew Wojna, Lukasz Jocz, Rafal Tybor, Robert Paluba, Wladyslaw Surala, Piotr Jarosz
  • Patent number: 11303600
    Abstract: A social networking system provides a user interface for a sending user to send messages to a recipient user in association with a content item posted by the recipient user in the social networking system. The sending user views a content item posted by the recipient user, such as a photograph. The sending user posts a direct message to the recipient user related to the content item. The direct message is displayed to the sending user superimposed over the content item. Subsequent direct messages in the conversation are also displayed superimposed over the content item.
    Type: Grant
    Filed: May 7, 2020
    Date of Patent: April 12, 2022
    Assignee: Meta Platforms, Inc.
    Inventors: Kathleen Warner, Diego De Pinho Mendes, Yfat Eyal
  • Patent number: RE49044
    Abstract: A three-dimensional (“3D”) avatar can be automatically created that resembles the physical appearance of an individual captured in one or more input images or video frames. The avatar can be further customized by the individual in an editing environment and used in various applications, including but not limited to gaming, social networking and video conferencing.
    Type: Grant
    Filed: September 28, 2018
    Date of Patent: April 19, 2022
    Assignee: Apple Inc.
    Inventors: Alex Nelson, Cedric Bray, Thomas Goossens, Rudolph Van Der Merwe, Richard E. Crandall, Bertrand Serlet