Patents Issued on January 9, 2024
  • Patent number: 11869135
    Abstract: A three-dimensional representation of a scene captured in an action shot base video may be determined. The three-dimensional representation may identify a camera pose. A representation of an object may be determined from a multi-view representation of the object that includes images of the object and that is navigable in one or more dimensions. An action shot video of the scene that includes a rendering of the object determined based on the representation and the camera pose may be generated.
    Type: Grant
    Filed: January 8, 2021
    Date of Patent: January 9, 2024
    Assignee: Fyusion, Inc.
    Inventors: Stefan Johannes Josef Holzer, Julius Santiago, Milos Vlaski, Endre Ajandi, Radu Bogdan Rusu
  • Patent number: 11869136
    Abstract: Systems and methods for generating and providing augmented virtual environments can include obtaining user data, processing the user data to determine a plurality of objects associated with the user data, and generating one or more renderings of the objects in an environment. The renderings can be generated based on a plurality of rendering datasets obtained based on the plurality of objects determined to be available to a user. The plurality of rendering datasets can include a plurality of three-dimensional meshes and/or a plurality of neural radiance field datasets. The one or more renderings can be provided via an interactive user interface that can allow a user to view renderings of different views of the objects in the environment from different positions and view directions.
    Type: Grant
    Filed: February 22, 2023
    Date of Patent: January 9, 2024
    Assignee: GOOGLE LLC
    Inventor: Igor Bonaci
  • Patent number: 11869137
    Abstract: The electronic apparatus includes a memory storing a multiple light field unit (LFU) structure in which a plurality of light fields is arranged in a lattice structure, and a processor configured to, based on a view position within the lattice structure being determined, generate a 360-degree image based on the view position by using the multiple LFU structure. The processor is configured to select an LFU to which the view position belongs from among the multiple LFU structure, allocate a rendering field-of-view (FOV) in predetermined degrees based on the view position, generate a plurality of view images based on a plurality of light fields comprising the selected LFU and the allocated FOV, and generate the 360-degree image by incorporating the generated plurality of view images.
    Type: Grant
    Filed: August 12, 2020
    Date of Patent: January 9, 2024
    Assignee: INHA UNIVERSITY RESEARCH AND BUSINESS FOUNDATION
    Inventors: Chae Eun Rhee, Hyunmin Jung, Hyuk-Jae Lee
  • Patent number: 11869138
    Abstract: A system and method for volume rendering a light field, wherein the light field data is subjected to a layering scheme that partitions the hogels into subsets. Each subset corresponds to a sub-volume of the layer volume and to a sub-region of the layer. Novel partitioning of the data is combined with an efficient local memory caching technique, plenoptic downsampling strategies to reduce memory bandwidth requirements, and a volume rendering algorithm to produce a rendered light field image. A reduction in the total number of samples required can be obtained while still maintaining the quality of the resulting image. A method is also provided to order memory accesses aligned with ray calculations in order to maximize access coherency. Real-time layered scene decomposition can be combined with a surface rendering method to create a hybrid real-time rendering method that supports rendering of scenes containing superimposed volumes and surfaces.
    Type: Grant
    Filed: December 16, 2022
    Date of Patent: January 9, 2024
    Assignee: Avalon Holographics Inc.
    Inventor: Matthew Hamilton
  • Patent number: 11869139
    Abstract: A method for generating a three-dimensional (3D) model of an object includes: capturing images of the object from a plurality of viewpoints, the images including color images; generating a 3D model of the object from the images, the 3D model including a plurality of planar patches; for each patch of the planar patches: mapping image regions of the images to the patch, each image region including at least one color vector; and computing, for each patch, at least one minimal color vector among the color vectors of the image regions mapped to the patch; generating a diffuse component of a bidirectional reflectance distribution function (BRDF) for each patch of planar patches of the 3D model in accordance with the at least one minimal color vector computed for each patch; and outputting the 3D model with the BRDF for each patch.
    Type: Grant
    Filed: January 5, 2023
    Date of Patent: January 9, 2024
    Assignee: Packsize LLC
    Inventors: Giulio Marin, Abbas Rafii, Carlo Dal Mutto, Kinh Tieu, Giridhar Murali, Alvise Memo
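As a rough illustration of the minimal-color-vector step above (one plausible reading of the abstract, not the patented kernel, and all names hypothetical): among all color observations mapped to a patch, the darkest one is least contaminated by view-dependent specular highlights, so it can stand in for the patch's diffuse component.

```python
import math

def minimal_color_vector(observations):
    """Among the RGB color vectors mapped to one planar patch, return
    the vector with the smallest norm. The darkest observation across
    views is least affected by specular highlights, so it serves as a
    crude estimate of the patch's diffuse (Lambertian) color."""
    return min(observations, key=lambda c: math.sqrt(sum(v * v for v in c)))
```

A patch seen in three views, one of them with a bright highlight, would keep the highlight-free observation.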
  • Patent number: 11869140
    Abstract: Improvements to graphics processing pipelines are disclosed. More specifically, the vertex shader stage, which performs vertex transformations, and the hull or geometry shader stages, are combined. If tessellation is disabled and geometry shading is enabled, then the graphics processing pipeline includes a combined vertex and graphics shader stage. If tessellation is enabled, then the graphics processing pipeline includes a combined vertex and hull shader stage. If tessellation and geometry shading are both disabled, then the graphics processing pipeline does not use a combined shader stage. The combined shader stages improve efficiency by reducing the number of executing instances of shader programs and associated resources reserved.
    Type: Grant
    Filed: April 19, 2021
    Date of Patent: January 9, 2024
    Assignee: Advanced Micro Devices, Inc.
    Inventors: Mangesh P. Nijasure, Randy W. Ramsey, Todd Martin
  • Patent number: 11869141
    Abstract: Techniques related to validating an image based 3D model of a scene are discussed. Such techniques include detecting an object within a captured image used to generate the scene, projecting the 3D model to a view corresponding to the captured image to generate a reconstructed image, and comparing image regions of the captured and reconstructed images corresponding to the object to validate the 3D model.
    Type: Grant
    Filed: May 14, 2019
    Date of Patent: January 9, 2024
    Assignee: Intel Corporation
    Inventors: Xiaofeng Tong, Wenlong Li
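The comparison step in this validation scheme can be pictured as a per-region error check. The mean-squared-error test below is an illustrative stand-in for whatever metric the patent actually claims; the function names and threshold are hypothetical.

```python
def region_mse(captured, reconstructed, box):
    """Mean squared error between the same rectangular object region
    in the captured image and in the image rendered by projecting the
    3D model to the capture viewpoint."""
    y0, x0, y1, x1 = box
    total = count = 0
    for y in range(y0, y1):
        for x in range(x0, x1):
            d = captured[y][x] - reconstructed[y][x]
            total += d * d
            count += 1
    return total / count

def model_is_valid(captured, reconstructed, box, max_mse=100.0):
    """Accept the 3D model when the reprojected object region matches
    the captured one closely enough."""
    return region_mse(captured, reconstructed, box) <= max_mse
```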
  • Patent number: 11869142
    Abstract: The disclosure provides a method, device and a computer-readable medium for performing three-dimensional blood vessel reconstruction. The device includes an interface configured to receive a single-view two-dimensional image of a blood vessel of a patient, where the single-view two-dimensional image is a projection image acquired in a predetermined projection direction. The device further includes a processor configured to estimate three-dimensional information of the blood vessel from the single-view two-dimensional image using an inference model, and reconstruct a three-dimensional model of the blood vessel based on the three-dimensional information.
    Type: Grant
    Filed: December 21, 2021
    Date of Patent: January 9, 2024
    Assignee: SHENZHEN KEYA MEDICAL TECHNOLOGY CORPORATION
    Inventors: Junjie Bai, Shubao Liu, Youbing Yin, Feng Gao, Yue Pan, Qi Song
  • Patent number: 11869143
    Abstract: Provided are a cutting method, apparatus and system for a point cloud model. In an embodiment, the method includes: using a two-dimensional first cutting window to select a point cloud structure comprising a target object from a point cloud model; adjusting the depth of the first cutting window, the length, width and depth of the first cutting window constituting a three-dimensional second cutting window, the target object being located in the second cutting window; identifying and marking all point cloud structures in the second cutting window to form a plurality of three-dimensional third cutting windows, the target object being located in one of the third cutting windows; and calculating the volume ratio of the point cloud structure in each third cutting window relative to the second cutting window, and selecting the third cutting window having the largest volume ratio.
    Type: Grant
    Filed: June 28, 2019
    Date of Patent: January 9, 2024
    Assignee: Siemens Ltd., China
    Inventors: Hai Feng Wang, Tao Fei
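The final selection step above reduces to comparing box volumes. A minimal sketch, assuming axis-aligned windows given as min/max corner pairs (the names are illustrative, not from the patent):

```python
def box_volume(box):
    # box = ((xmin, ymin, zmin), (xmax, ymax, zmax))
    (x0, y0, z0), (x1, y1, z1) = box
    return (x1 - x0) * (y1 - y0) * (z1 - z0)

def select_target_window(second_window, third_windows):
    """Return the third cutting window whose volume ratio relative to
    the second cutting window is largest; the target object is assumed
    to sit in that window."""
    ref = box_volume(second_window)
    return max(third_windows, key=lambda w: box_volume(w) / ref)
```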
  • Patent number: 11869144
    Abstract: In some implementations, a device includes one or more sensors, one or more processors and a non-transitory memory. In some implementations, a method includes determining that a first portion of a physical environment is associated with a first saliency value and a second portion of the physical environment is associated with a second saliency value that is different from the first saliency value. In some implementations, the method includes obtaining, via the one or more sensors, environmental data corresponding to the physical environment. In some implementations, the method includes generating, based on the environmental data, a model of the physical environment by modeling the first portion with a first set of modeling features that is a function of the first saliency value and modeling the second portion with a second set of modeling features that is a function of the second saliency value.
    Type: Grant
    Filed: February 23, 2022
    Date of Patent: January 9, 2024
    Assignee: APPLE INC.
    Inventors: Payal Jotwani, Bo Morgan, Behrooz Mahasseni, Bradley W. Peebler, Dan Feng, Mark E. Drummond, Siva Chandra Mouli Sivapurapu
  • Patent number: 11869145
    Abstract: A method for projecting an input device, an electronic apparatus, and a non-transitory computer readable storage medium are provided. The method includes: identifying a three-dimensional (3D) model of an input device, wherein the input device comprises a keyboard and a mouse; acquiring an image of the input device captured by a camera in a virtual reality (VR) system; identifying at least one feature identifier of the input device in the image; calculating target information in the VR system corresponding to the at least one feature identifier; and projecting, according to the target information, the 3D model into a VR scene constructed by the VR system.
    Type: Grant
    Filed: June 13, 2023
    Date of Patent: January 9, 2024
    Assignee: Beijing Source Technology Co., Ltd.
    Inventor: Zixiong Luo
  • Patent number: 11869146
    Abstract: A three-dimensional model generation method includes: obtaining map information generated by camera calibration executed by controlling one or more cameras to shoot a subject from a plurality of viewpoints, the map information including three-dimensional points each indicating a position on the subject in a three-dimensional space; obtaining a first image from a first viewpoint and a second image from a second viewpoint; determining a search range in the three-dimensional space, based on the map information, the search range including a first three-dimensional point on the subject, the first three-dimensional point corresponding to a first point in the first image; searching for a similar point that is similar to the first point, in a range in the second image which corresponds to the search range; and generating a three-dimensional model using a search result in the searching.
    Type: Grant
    Filed: May 19, 2022
    Date of Patent: January 9, 2024
    Assignee: PANASONIC INTELLECTUAL PROPERTY MANAGEMENT CO., LTD.
    Inventors: Kensho Teranishi, Satoshi Yoshikawa, Toru Matsunobu, Masaki Fukuda
  • Patent number: 11869147
    Abstract: A computer-implemented method of machine-learning including obtaining an architecture for a neural network which is configured to take as an input a 2D sketch, and to output a 3D model represented by the 2D sketch. The 3D model is a parameterized 3D model defined by a set of parameters consisting of a first subset of one or more parameters and a second subset of one or more parameters. The neural network is configured to selectively output a value for the set and take as input a value for the first subset from a user and output a value for the second subset. The method of machine-learning also includes teaching the neural network.
    Type: Grant
    Filed: August 20, 2021
    Date of Patent: January 9, 2024
    Assignee: DASSAULT SYSTEMES
    Inventor: Nicolas Beltrand
  • Patent number: 11869148
    Abstract: The present disclosure provides a three-dimensional object modeling method, an image processing method, and an image processing device.
    Type: Grant
    Filed: June 11, 2020
    Date of Patent: January 9, 2024
    Assignee: BEIJING CHENGSHI WANGLIN INFORMATION TECHNOLOGY CO., LTD.
    Inventors: Lianjiang Zhou, Haibo Guan, Xiaojun Duan, Zhongfeng Wang, Haiyang Li, Yi Yang, Chen Zhu, Hu Tian
  • Patent number: 11869149
    Abstract: In various embodiments, an unsupervised training application executes a neural network on a first point cloud to generate keys and values. The unsupervised training application generates output vectors based on a first query set, the keys, and the values and then computes spatial features based on the output vectors. The unsupervised training application computes quantized context features based on the output vectors and a first set of codes representing a first set of 3D geometry blocks. The unsupervised training application modifies the first neural network based on a likelihood of reconstructing the first point cloud, the quantized context features, and the spatial features to generate an updated neural network. A trained machine learning model includes the updated neural network, a second query set, and a second set of codes representing a second set of 3D geometry blocks and maps a point cloud to a representation of 3D geometry instances.
    Type: Grant
    Filed: May 13, 2022
    Date of Patent: January 9, 2024
    Assignee: NVIDIA Corporation
    Inventors: Ben Eckart, Christopher Choy, Chao Liu, Yurong You
  • Patent number: 11869150
    Abstract: Techniques are disclosed for providing an avatar personalized for a specific person based on known data from a relatively large population of individuals and a relatively small data sample of the specific person. Auto-encoder neural networks are used in a novel manner to capture latent-variable representations of facial models. Once such models are developed, a very limited data sample of a specific person may be used in combination with convolutional-neural-networks or statistical filters, and driven by audio/visual input during real-time operations, to generate a realistic avatar of the specific individual's face. In some embodiments, conditional variables may be encoded (e.g. gender, age, body-mass-index, ethnicity, emotional state). In other embodiments, different portions of a face may be modeled separately and combined at run-time (e.g., face, tongue and lips). Models in accordance with this disclosure may be used to generate resolution independent output.
    Type: Grant
    Filed: October 28, 2020
    Date of Patent: January 9, 2024
    Assignee: Apple Inc.
    Inventor: Andrew P. Mason
  • Patent number: 11869151
    Abstract: Aspects of the technology described herein relate to systems and techniques for finite element analysis of alternating electric fields such as tumor treating fields. A system may be configured to receive medical data of the patient, generate segmented medical data by performing segmentation using the medical data of the patient, generate a model for a transducer array configuration and the generated segmented medical data, wherein the transducer array configuration is configured to produce alternating electric fields, and determine one or more metrics of the alternating electric fields for each of the one or more transducer array configurations. The system may further compare the metrics to reference values and/or metrics of another transducer array configuration and/or determine and/or recommend a transducer array configuration based on the metrics.
    Type: Grant
    Filed: January 26, 2022
    Date of Patent: January 9, 2024
    Assignee: Beth Israel Deaconess Medical Center
    Inventors: Eric T. Wong, Edwin Lok
  • Patent number: 11869152
    Abstract: The present invention provides systems and methods for generating a 3D product mesh model and product dimensions from user images. The system is configured to receive one or more images of a user's body part, extract a body part mesh having a plurality of body part key points, generate a product mesh from an identified subset of the body part mesh, and generate one or more product dimensions in response to the selection of one or more key points from the product mesh. The system may output the product mesh, the product dimensions, or a manufacturing template of the product. In some embodiments, the system uses one or more machine learning modules to generate the body part mesh, identify the subset of the body part mesh, generate the product mesh, select the one or more key points, and/or generate the one or more product dimensions.
    Type: Grant
    Filed: May 11, 2021
    Date of Patent: January 9, 2024
    Assignee: Bodygram, Inc.
    Inventors: Chong Jin Koh, Kyohei Kamiyama, Nobuyuki Hayashi
  • Patent number: 11869153
    Abstract: Provided is a system for structured and controlled movement and viewing within a point cloud. The system may generate or obtain a plurality of data points and one or more waypoints for the point cloud, present a first subset of the plurality of data points in a field-of-view of a camera at an initial position and an initial orientation of a first waypoint, change the camera field-of-view from at least one of (i) the initial position to a modified position within a volume of positions defined by orientation controls of the first waypoint or (ii) the initial orientation to a modified orientation within a range of orientations defined by the orientation controls of the first waypoint, and may present a second subset of the plurality of data points in the camera field-of-view at one or more of the modified position and the modified orientation.
    Type: Grant
    Filed: August 20, 2020
    Date of Patent: January 9, 2024
    Assignee: Illuscio, Inc.
    Inventors: Joseph Bogacz, Robert Monaghan
  • Patent number: 11869154
    Abstract: A method and system are for acquiring a dataset created via an imaging modality, the dataset describing a 3D-shape of a 3D-object; adapting, in dependence on the 3D-shape of the 3D-object, a flat 2D-grid of intersecting grid lines to follow a surface curvature of the 3D-object to create an adapted grid, a distance between two neighbouring intersections, along the grid lines of the adapted grid following the surface curvature of the 3D-object, being equal to a respective corresponding distance between a respective two neighbouring intersections of the flat 2D-grid before the adapting; and outputting the adapted grid for display over at least one of the 3D-object and a virtual model of the 3D-object.
    Type: Grant
    Filed: October 2, 2019
    Date of Patent: January 9, 2024
    Assignee: SIEMENS HEALTHCARE GMBH
    Inventor: Bjoern Suchy
  • Patent number: 11869155
    Abstract: The current document is directed to methods and systems that provide an automated augmented-reality facility for determining whether or not personal items and luggage meet airline requirements. In one implementation, a semi-transparent three-dimensional image generated from a model of a personal item or luggage item of the maximum dimensions allowed by an airline is displayed to a user, on a user device that includes a camera, within an electronic image of the real scene encompassed by the field of view of the camera. The user can then position a real personal item or luggage item at a point in space corresponding to the apparent position of the virtual image of the model and compare the dimensions of the real personal item or luggage item to the dimensions of the semi-transparent model volume in order to determine whether or not the real personal item or luggage item meets airline size requirements.
    Type: Grant
    Filed: February 19, 2020
    Date of Patent: January 9, 2024
    Assignee: APP IN THE AIR, INC.
    Inventors: Bayram Annakov, Sergey Pronin
  • Patent number: 11869156
    Abstract: Eyewear presenting text corresponding to spoken words (e.g., in speech bubbles) and optionally translating from one language to another. In one example, an interactive augmented reality experience is provided between two users of eyewear devices to allow one user of an eyewear device to share a personal attribute of the user with a second user. The personal attribute can be speech spoken by a remote second user of eyewear converted to text. The converted text can be displayed on a display of eyewear of the first user proximate the viewed second user. The personal attribute may be displayed in a speech bubble proximate the second user, such as proximate the head or mouth of the second user. The language of the spoken speech can be recognized by the second user eyewear, and translated to a language that is understood by the first user.
    Type: Grant
    Filed: June 29, 2021
    Date of Patent: January 9, 2024
    Assignee: Snap Inc.
    Inventors: Ilteris Canberk, Shin Hwun Kang, Dmytro Kucher
  • Patent number: 11869157
    Abstract: Example systems and methods for virtual visualization of a three-dimensional (3D) model of an object in a two-dimensional (2D) environment. The method may include projecting a ray from a user device to a ground plane and determining an angle at which the projected ray touches the ground plane. The method may further include determining a level for the ground plane for positioning the 3D model of the object in the 2D environment.
    Type: Grant
    Filed: October 21, 2021
    Date of Patent: January 9, 2024
    Assignee: West Texas Technology Partners, LLC
    Inventor: Milos Jovanovic
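The ray-projection step described above is standard ray–plane intersection. A minimal sketch, assuming a horizontal ground plane and a y-up coordinate frame (both assumptions; the patent does not specify a parameterization):

```python
import math

def ray_ground_hit(origin, direction, ground_y=0.0):
    """Intersect a ray with the horizontal plane y = ground_y and
    return (hit_point, incidence_angle_degrees), or None if the ray
    never reaches the plane."""
    ox, oy, oz = origin
    dx, dy, dz = direction
    if dy == 0:
        return None  # ray runs parallel to the ground plane
    t = (ground_y - oy) / dy
    if t <= 0:
        return None  # plane lies behind the ray origin
    hit = (ox + t * dx, oy + t * dy, oz + t * dz)
    # angle between the ray and the plane (90 degrees minus the
    # angle to the plane normal)
    norm = math.sqrt(dx * dx + dy * dy + dz * dz)
    angle = math.degrees(math.asin(abs(dy) / norm))
    return hit, angle
```

A device held 1.5 m above the ground, pointing 45 degrees downward, hits the plane 1.5 m ahead at a 45-degree angle.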
  • Patent number: 11869158
    Abstract: A cross reality system enables any of multiple devices to efficiently render shared location-based content. The cross reality system may include a cloud-based service that responds to requests from devices to localize with respect to a stored map. The service may return to the device information that localizes the device with respect to the stored map. In conjunction with localization information, the service may provide information about locations in the physical world proximate the device for which virtual content has been provided. Based on information received from the service, the device may render, or stop rendering, virtual content to each of multiple users based on the user's location and specified locations for the virtual content.
    Type: Grant
    Filed: May 25, 2022
    Date of Patent: January 9, 2024
    Assignee: Magic Leap, Inc.
    Inventors: Timothy Dean Caswell, Konrad Piascik, Leonid Zolotarev, Mark Ashley Rushton
  • Patent number: 11869159
    Abstract: A laser scanner is used with a mixed reality device to track and/or locate objects in an environment, such as a construction site. In some configurations, mixed reality is used to assist laser scanning. A collection of data points representing a point cloud can be acquired with a laser scanner. A reference frame of a mixed-reality device is aligned to the data of the point cloud. A graphic is presented on a display of the mixed-reality device. The graphic is positioned on the display in relation to the environment, based on the reference frame of the mixed-reality device being aligned to data of the point cloud. An item in the environment is tracked (e.g., a hazard or a tool). Data is provided to the mixed-reality device regarding a position of the item in the environment.
    Type: Grant
    Filed: May 26, 2022
    Date of Patent: January 9, 2024
    Assignee: Trimble Inc.
    Inventors: Kent Kahle, Jordan Lawver
  • Patent number: 11869160
    Abstract: Interference-based augmented reality hosting platforms are presented. Hosting platforms can include networking nodes capable of analyzing a digital representation of a scene to derive interference among elements of the scene. The hosting platform utilizes the interference to adjust the presence of augmented reality objects within an augmented reality experience. Elements of a scene can constructively interfere, enhancing presence of augmented reality objects; or destructively interfere, suppressing presence of augmented reality objects.
    Type: Grant
    Filed: October 24, 2022
    Date of Patent: January 9, 2024
    Assignee: Nant Holdings IP, LLC
    Inventor: Patrick Soon-Shiong
  • Patent number: 11869161
    Abstract: Various embodiments are generally directed to techniques of overlaying a virtual object on a physical object in augmented reality (AR). A computing device may receive one or more images of the physical object, perform analysis on the images (such as image segmentation) to generate a digital outline, and determine a position and a scale of the physical object based at least in part on the digital outline. The computing device may configure (e.g., rotate, scale) a 3D model of the physical object to match the determined position and scale of the physical object. The computing device may place or overlay a 3D virtual object on the physical object in AR based on a predefined location relation between the 3D virtual object and the 3D model of the physical object, and further, generate a composite view of the placement or overlay.
    Type: Grant
    Filed: February 3, 2023
    Date of Patent: January 9, 2024
    Assignee: Capital One Services, LLC
    Inventors: Micah Price, Geoffrey Dagley, Staevan Duckworth, Qiaochu Tang, Jason Hoover, Stephen Wylie, Olalekan Awoyemi
  • Patent number: 11869162
    Abstract: A processor-implemented method includes: adjusting a virtual content object based on a shape of the virtual content object projected onto a projection plane; and visualizing the adjusted virtual content object on the projection plane.
    Type: Grant
    Filed: January 4, 2021
    Date of Patent: January 9, 2024
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Heesae Lee, Hyun Sung Chang
  • Patent number: 11869163
    Abstract: Systems and methods are provided for machine learning-based rendering of a clothed human with a realistic 3D appearance by virtually draping one or more garments or items of clothing on a 3D human body model. The machine learning model may be trained to drape a garment on a 3D body mesh using training data that includes a variety of 3D body meshes reflecting a variety of different body types. The machine learning model may include an encoder trained to extract body features from an input 3D mesh, and a decoder network trained to drape the garment on the input 3D mesh based at least in part on spectral decomposition of a mesh associated with the garment. The trained machine learning model may then be used to drape the garment or a variation of the garment on a new input body mesh.
    Type: Grant
    Filed: September 17, 2021
    Date of Patent: January 9, 2024
    Assignee: Amazon Technologies, Inc.
    Inventors: Junbang Liang, Ming Lin, Javier Romero Gonzalez-Nicolas, Adam Douglas Peck, Chetan Shivarudrappa
  • Patent number: 11869164
    Abstract: The technical problem of creating an augmented reality (AR) experience that, on one hand, is accessible from a camera view user interface provided with a messaging client and that, also, can perform a modification based on a previously captured image of a user, is addressed by providing an AR component. When a user, while accessing the messaging client, engages a user selectable element representing the AR component in the camera view user interface, the messaging system loads the AR component in the messaging client. The AR component comprises a target media content object, which can be an animation or a live-action video. The loaded AR component accesses a portrait image associated with a user and modifies the target media content using the portrait image. The resulting target media content object is displayed in the camera view user interface.
    Type: Grant
    Filed: May 25, 2022
    Date of Patent: January 9, 2024
    Assignee: Snap Inc.
    Inventors: Roman Golobokov, Aleksandr Mashrabov, Dmitry Matov, Jeremy Baker Voss
  • Patent number: 11869165
    Abstract: An avatar editing environment is disclosed that allows users to create custom avatars for use in online games and other applications. Starting with a blank face, the user can add, rescale and position different elements (e.g., eyes, nose, mouth) on the blank face. The user can also change the shape of the avatar's face, the avatar's skin color and the color of all the elements. In some implementations, touch input and gestures can be used to manually edit the avatar. Various controls can be used to create the avatar, such as controls for resizing, rotating, positioning, etc. The user can choose between manual and automatic avatar creation. The avatar editing environment can be part of a framework that is available to applications. One or more elements of the avatar can be animated.
    Type: Grant
    Filed: October 21, 2022
    Date of Patent: January 9, 2024
    Assignee: Apple Inc.
    Inventors: Marcel Van Os, Thomas Goossens, Laurent Baumann, Michael Dale Lampell, Alexandre Carlhian
  • Patent number: 11869166
    Abstract: A microscope system comprising an eyepiece, an objective that guides light from a sample to the eyepiece, a tube lens that is disposed on a light path between the eyepiece and the objective and forms an optical image of the sample on the basis of light therefrom, a projection apparatus that projects a projection image including a first assistance image onto an image plane on which the optical image is formed, and a processor that performs processes. The processes include generating projection image data representing the projection image. The first assistance image is an image of the sample in which a region wider than an actual field of view corresponding to the optical image is seen. The first assistance image is projected onto a portion of the image plane that is close to an outer edge of the optical image.
    Type: Grant
    Filed: March 9, 2021
    Date of Patent: January 9, 2024
    Assignee: Evident Corporation
    Inventors: Tatsuo Nakata, Akifumi Kabeya, Takashi Yoneyama, Hiroshi Sasaki
  • Patent number: 11869167
    Abstract: The embodiments of the disclosure provide a method for transmitting reduced depth information and an electronic device. The method includes: obtaining a first depth map of an image frame, wherein the first depth map includes a plurality of first depth pixels corresponding to multiple image pixels in the image frame; downscaling the first depth map as a second depth map, wherein the second depth map includes multiple second depth pixels, and each second depth pixel has a depth value; adjusting the second depth map as a third depth map by scaling up the depth value of each second depth pixel based on a specific ratio; dividing the third depth map into multiple fourth depth maps; appending the fourth depth maps to the image frame; encoding the image frame appended with the fourth depth maps; and transmitting the encoded image frame to a client device.
    Type: Grant
    Filed: July 13, 2021
    Date of Patent: January 9, 2024
    Assignee: HTC Corporation
    Inventor: Yu-You Wen
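The downscale / value-scale / tile pipeline above can be sketched in a few lines. This is a toy stand-in, not HTC's claimed method: it averages blocks for the downscale (the patent does not say which filter is used), and the block size, ratio, and tiling are illustrative parameters.

```python
def reduce_depth_map(depth, block=2, ratio=4, tiles=2):
    """Produce the 'fourth depth maps': downscale the first depth map,
    scale up each remaining depth value by a specific ratio, then split
    the adjusted map into tiles for appending to the image frame."""
    h, w = len(depth), len(depth[0])
    # Step 1: downscale by averaging each block x block region
    # (second depth map).
    small = [
        [
            sum(depth[by * block + i][bx * block + j]
                for i in range(block) for j in range(block)) / block ** 2
            for bx in range(w // block)
        ]
        for by in range(h // block)
    ]
    # Step 2: scale every depth value up by the ratio (third depth map).
    scaled = [[v * ratio for v in row] for row in small]
    # Step 3: divide the third depth map into horizontal strips
    # (the fourth depth maps).
    strip = len(scaled) // tiles
    return [scaled[k * strip:(k + 1) * strip] for k in range(tiles)]
```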
  • Patent number: 11869168
    Abstract: A method for optimizing a display image based on display content is provided. The method is applicable to a display control chip, and includes following operations: receiving a video signal configured to transmit an image of a frame; with respect to multiple different sub-areas in an area of the image, calculating a pixel number distribution of each sub-area along multiple characteristic values; determining, according to the pixel number distribution, whether the sub-area comprises a corresponding first target pattern of multiple first target patterns; if the multiple sub-areas comprise the multiple first target patterns, respectively, performing a first preset image processing to the image to generate a processed image; if the multiple sub-areas are free from comprising the multiple first target patterns, respectively, omitting the first preset image processing to the image; and generating a display signal according to the processed image or the image.
    Type: Grant
    Filed: February 14, 2022
    Date of Patent: January 9, 2024
    Assignee: Realtek Semiconductor Corporation
    Inventors: Yuh-Wey Lin, Jui-Te Wei
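    The per-sub-area distribution check might look like the sketch below. The intensity binning, the dominance criterion in `has_target_pattern`, and all names are assumptions for illustration, not the patented detection logic.

```python
import numpy as np

def subarea_histograms(image, rows=2, cols=2, bins=8):
    # Pixel-number distribution of each sub-area along a set of
    # characteristic values (here: 8-bit intensity, binned).
    hists = []
    for row_block in np.array_split(image, rows, axis=0):
        for block in np.array_split(row_block, cols, axis=1):
            hist, _ = np.histogram(block, bins=bins, range=(0, 256))
            hists.append(hist)
    return hists

def has_target_pattern(hist, dominance=0.8):
    # Illustrative criterion: a sub-area whose pixels pile up in a single
    # bin (e.g. a flat on-screen-display region) counts as a target pattern.
    return hist.max() >= dominance * hist.sum()
```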
  • Patent number: 11869169
    Abstract: The present disclosure describes devices and methods for generating RGB images from Bayer filter images using adaptive sub-pixel spatiotemporal interpolation. An electronic device includes a processor configured to estimate green values at red and blue pixel locations of an input Bayer frame based on green values at green pixel locations of the input Bayer frame and a kernel for green pixels, generate a green channel of a joint demosaiced-warped output RGB pixel from the input Bayer frame based on the green values at the green pixel locations, the kernel for green pixels, and an alignment vector map, and generate red and blue channels of the joint demosaiced-warped output RGB pixel from the input Bayer frame based on the estimated green values at the red and blue pixel locations, kernels for red and blue pixels, and the alignment vector map.
    Type: Grant
    Filed: February 4, 2022
    Date of Patent: January 9, 2024
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Nguyen Thang Long Le, Tyler Luu, John William Glotzbach, Hamid Rahim Sheikh
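    As a rough stand-in for the learned, alignment-aware kernels the abstract describes, plain bilinear averaging of the four green neighbours on an RGGB mosaic looks like this (the function name, loop bounds, and fixed RGGB layout are illustrative assumptions):

```python
import numpy as np

def estimate_green(bayer):
    # Estimate G at R/B sites of an RGGB mosaic by averaging the four
    # G neighbours. In an RGGB layout, G sits where (row + col) is odd.
    h, w = bayer.shape
    green = bayer.astype(float).copy()
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            if (y + x) % 2 == 0:  # R or B site
                green[y, x] = (bayer[y - 1, x] + bayer[y + 1, x] +
                               bayer[y, x - 1] + bayer[y, x + 1]) / 4.0
    return green
```

    The patented method replaces this fixed average with per-pixel kernels and an alignment vector map so that demosaicing and warping happen jointly.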
  • Patent number: 11869170
    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for training a neural network. One of the methods includes receiving a training image and a ground truth super-resolution image; processing a first training network input comprising the training image using the neural network to generate a first training super-resolution image; processing a first critic input generated from (i) the training image and (ii) the ground truth super-resolution image using a critic neural network to map the first critic input to a latent representation; processing a second critic input generated from (i) the training image and (ii) the first training super-resolution image using the critic neural network to map the second critic input to a latent representation; determining a gradient of a generator loss function that measures a distance between the latent representations of the critic inputs; and determining an update to the parameters.
    Type: Grant
    Filed: November 18, 2019
    Date of Patent: January 9, 2024
    Assignee: Google LLC
    Inventors: David Berthelot, Ian Goodfellow
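    The generator loss described above, a distance between the critic's latent codes for the two pairs, can be sketched with a toy linear critic. The real critic is a neural network; all names here are hypothetical.

```python
import numpy as np

def critic_latent(pair, W):
    # Toy linear "critic": maps a flattened (input, super-resolution)
    # pair to a latent representation.
    return W @ pair

def generator_loss(train_img, gt_sr, gen_sr, W):
    # Distance, in the critic's latent space, between the (input, ground
    # truth) pair and the (input, generated) pair -- the quantity whose
    # gradient would update the generator's parameters.
    z_real = critic_latent(np.concatenate([train_img.ravel(), gt_sr.ravel()]), W)
    z_fake = critic_latent(np.concatenate([train_img.ravel(), gen_sr.ravel()]), W)
    return float(np.linalg.norm(z_real - z_fake))
```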
  • Patent number: 11869171
    Abstract: Embodiments are generally directed to an adaptive deformable kernel prediction network for image de-noising. One embodiment is a method for de-noising an image comprising a plurality of pixels by a convolutional neural network implemented on a compute engine, the method comprising: for each of the plurality of pixels of the image, generating a convolutional kernel having a plurality of kernel values for the pixel; generating a plurality of offsets for the pixel respectively corresponding to the plurality of kernel values, each of the plurality of offsets indicating a deviation from the pixel position of the pixel; determining a plurality of deviated pixel positions based on the pixel position of the pixel and the plurality of offsets; and filtering the pixel with the convolutional kernel and the pixel values at the plurality of deviated pixel positions to obtain a de-noised pixel.
    Type: Grant
    Filed: November 5, 2020
    Date of Patent: January 9, 2024
    Assignee: INTEL CORPORATION
    Inventors: Anbang Yao, Ming Lu, Yikai Wang, Xiaoming Chen, Junjie Huang, Tao Lv, Yuanke Luo, Yi Yang, Feng Chen, Zhiming Wang, Zhiqiao Zheng, Shandong Wang
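    The per-pixel filtering step can be sketched as follows. Rounding the offsets to whole pixels is a simplification (the patented offsets are continuous and learned, typically sampled bilinearly), and the names are illustrative.

```python
import numpy as np

def filter_pixel(image, y, x, kernel_vals, offsets):
    # Filter one pixel with a predicted kernel whose taps are displaced
    # by per-tap offsets; taps are clamped to the image bounds.
    h, w = image.shape
    acc = 0.0
    for k, (dy, dx) in zip(kernel_vals, offsets):
        yy = min(max(int(round(y + dy)), 0), h - 1)
        xx = min(max(int(round(x + dx)), 0), w - 1)
        acc += k * image[yy, xx]
    return acc / sum(kernel_vals)
```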
  • Patent number: 11869172
    Abstract: Embodiments are disclosed for generating lens blur effects. The disclosed systems and methods comprise receiving a request to apply a lens blur effect to an image, the request identifying an input image and a first disparity map, generating a plurality of disparity maps and a plurality of distance maps based on the first disparity map, splatting influences of pixels of the input image using a plurality of reshaped kernel gradients, gathering aggregations of the splatted influences, and determining a lens blur for a first pixel of the input image in an output image based on the gathered aggregations of the splatted influences.
    Type: Grant
    Filed: November 14, 2022
    Date of Patent: January 9, 2024
    Assignee: Adobe Inc.
    Inventors: Haiting Lin, Yumin Jia, Jen-Chan Chien
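    The splat-then-gather structure can be illustrated with a naive disc splat: each pixel spreads its influence over a disc sized by its disparity, and the accumulated influences are then normalized. The disparity-to-radius mapping and all names are assumptions; the patent's reshaped kernel gradients are not modeled here.

```python
import numpy as np

def splat_blur(image, disparity, max_radius=2):
    # Splatting: spread each source pixel over a disc whose radius
    # follows its disparity. Gathering: normalize the accumulated
    # influence per output pixel.
    h, w = image.shape
    acc = np.zeros((h, w))
    weight = np.zeros((h, w))
    for y in range(h):
        for x in range(w):
            r = int(round(disparity[y, x] * max_radius))
            for dy in range(-r, r + 1):
                for dx in range(-r, r + 1):
                    if dy * dy + dx * dx <= r * r and \
                       0 <= y + dy < h and 0 <= x + dx < w:
                        acc[y + dy, x + dx] += image[y, x]
                        weight[y + dy, x + dx] += 1.0
    return acc / np.maximum(weight, 1e-8)
```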
  • Patent number: 11869173
    Abstract: Various disclosed embodiments are directed to inpainting one or more portions of a target image based on merging (or selecting) one or more portions of a warped image with (or from) one or more portions of an inpainting candidate (e.g., via a learning model). This, among other functionality described herein, resolves the inaccuracies of existing image inpainting technologies.
    Type: Grant
    Filed: December 27, 2022
    Date of Patent: January 9, 2024
    Assignee: Adobe Inc.
    Inventors: Yuqian Zhou, Elya Shechtman, Connelly Stuart Barnes, Sohrab Amirghodsi
  • Patent number: 11869174
    Abstract: An image processing apparatus according to the present invention includes at least one memory and at least one processor which function as: a first determining unit configured to determine a first brightness range, which is included in a dynamic range of a first image and is higher than a predetermined brightness; and a first converting unit configured to convert the first image into a second image whose dynamic range is narrower than that of the first image, based on the determination result of the first determining unit, wherein, based on the first brightness range determined by the first determining unit, the first converting unit determines a second brightness range, which is included in the dynamic range of the second image and corresponds to the first brightness range.
    Type: Grant
    Filed: March 9, 2022
    Date of Patent: January 9, 2024
    Assignee: CANON KABUSHIKI KAISHA
    Inventors: Masaki Fujioka, Masaharu Yamagishi, Takehito Fukushima
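    The mapping of a first (high) brightness range into a narrower second range can be illustrated with a simple linear knee. The threshold and peak values below are arbitrary assumptions, not the patented determination logic.

```python
import numpy as np

def compress_highlights(hdr, knee=100.0, hdr_peak=1000.0, sdr_peak=203.0):
    # Map the first brightness range (above `knee`) into a narrower
    # second range ending at the SDR peak; values at or below the knee
    # pass through unchanged.
    hdr = np.asarray(hdr, dtype=float)
    out = hdr.copy()
    hi = hdr > knee
    out[hi] = knee + (hdr[hi] - knee) * (sdr_peak - knee) / (hdr_peak - knee)
    return out
```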
  • Patent number: 11869175
    Abstract: An image reproduction system and methods for providing colorant data to an end device are disclosed. A method includes extracting general HSV value data for each pixel of an image from image data. For each pixel, the general HSV value data is transformed to generate universal perceived brightness (Bp) and universal perceived chroma (Cp) value data. End-device colorant data associated with the general HSV value data is retrieved for each pixel and scaled using the Bp and Cp value data to obtain scaled end-device colorant data. The scaled end-device colorant data is transmitted to the end device.
    Type: Grant
    Filed: June 10, 2022
    Date of Patent: January 9, 2024
    Inventor: Edward M. Granger
  • Patent number: 11869176
    Abstract: This invention relates to a hyperspectral imaging system for denoising and/or color unmixing multiple overlapping spectra in a low signal-to-noise regime with a fast analysis time. The system may carry out Hyper-Spectral Phasor (HySP) calculations to effectively analyze hyperspectral time-lapse data, for example five-dimensional (5D) hyperspectral time-lapse data. Advantages of this imaging system may include: (a) fast computational speed, (b) the ease of phasor analysis, and (c) a denoising algorithm to obtain the minimally acceptable signal-to-noise ratio (SNR). An unmixed color image of a target may be generated. These images may be used in diagnosing a health condition, which may improve a patient's clinical outcome and the tracking of the patient's health over time.
    Type: Grant
    Filed: January 31, 2020
    Date of Patent: January 9, 2024
    Assignee: UNIVERSITY OF SOUTHERN CALIFORNIA
    Inventors: Wen Shi, Eun Sang Koo, Scott E. Fraser, Francesco Cutrale
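    Spectral phasor coordinates are conventionally the first Fourier harmonic of the normalized intensity spectrum, which can be computed as below. This is a generic phasor sketch, not the patented HySP denoising pipeline.

```python
import numpy as np

def spectral_phasor(spectrum, harmonic=1):
    # Phasor coordinates (G, S): the n-th Fourier harmonic of the
    # intensity spectrum, normalized by the total intensity. Each pixel's
    # spectrum maps to one point on the phasor plane.
    spectrum = np.asarray(spectrum, dtype=float)
    n = len(spectrum)
    k = np.arange(n)
    total = spectrum.sum()
    g = float((spectrum * np.cos(2 * np.pi * harmonic * k / n)).sum() / total)
    s = float((spectrum * np.sin(2 * np.pi * harmonic * k / n)).sum() / total)
    return g, s
```

    Pixels with similar emission spectra cluster on the phasor plane, which is what makes the representation useful for fast unmixing and denoising.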
  • Patent number: 11869177
    Abstract: Provided is an inspection support system capable of improving work efficiency and accuracy in the inspection of structures. A self-traveling apparatus autonomously travels to a target position in response to a first travel command based on a first inspection image, and shoots the photographing targets captured in the first inspection image to acquire a second inspection image; an information processing apparatus extracts, from among the photographing targets captured in the second inspection image, a matched target that matches a photographing target captured in the first inspection image, and correlates with the matched target the identification information that identifies, in an identifiable manner, the corresponding photographing target captured in the first inspection image.
    Type: Grant
    Filed: February 28, 2020
    Date of Patent: January 9, 2024
    Inventors: Fuminori Yamasaki, Takashi Karino
  • Patent number: 11869178
    Abstract: A method of predicting virtual metrology data for a wafer lot that includes receiving first image data from an imager system, the first image data relating to at least one first wafer lot, receiving measured metrology data from metrology equipment relating to the at least one first wafer lot, applying one or more machine learning techniques to the first image data and the measured metrology data to generate at least one predictive model for predicting at least one of virtual metrology data or virtual cell metrics data of wafer lots, and utilizing the at least one generated predictive model to generate at least one of first virtual metrology data or first virtual cell metrics data for the first wafer lot.
    Type: Grant
    Filed: December 10, 2020
    Date of Patent: January 9, 2024
    Assignee: Micron Technology, Inc.
    Inventors: Amitava Majumdar, Qianlan Liu, Pradeep Ramachandran, Shawn D. Lyonsmith, Steve K. McCandless, Ted L. Taylor, Ahmed N. Noemaun, Gordon A. Haller
  • Patent number: 11869179
    Abstract: An abnormal part display apparatus, an abnormal part display system, an abnormal part displaying method, and an abnormal part displaying program capable of improving visibility of an abnormal part in an object are provided. An abnormal part display apparatus 11 according to the present disclosure includes an acquisition unit 111 configured to acquire point group data of an object obtained by measuring the object by using a laser ranging apparatus 12, and a photograph image of the object obtained by photographing the object by using a photographing apparatus 13, a display unit 112 configured to display the point group data and the photograph image on a predetermined screen, and a control unit 113 configured to control the point group data and the photograph image to be displayed in the display unit 112.
    Type: Grant
    Filed: March 5, 2021
    Date of Patent: January 9, 2024
    Assignee: NEC CORPORATION
    Inventors: Shigeo Suzuki, Taisuke Tanabe, Hiroshi Matsumoto, Takanori Shigeta, Junichi Abe, Akira Tsuji, Yoshimasa Ono, Jiro Abe
  • Patent number: 11869180
    Abstract: The invention provides a method and system for measuring undeformed chip thickness in micro-milling. The method includes the steps of: S1: acquiring a surface topography picture of the bottom of a flute after micro-milling; S2: extracting the tool marks at the center line of the flute from the surface topography picture; S3: calculating the spacing distance between adjacent tool marks and, based on it, the difference of equivalent cutting radius between adjacent cutter teeth; and S4: reconstructing the instantaneous undeformed chip thickness in micro-milling based on the difference of equivalent cutting radius between adjacent cutter teeth.
    Type: Grant
    Filed: July 12, 2022
    Date of Patent: January 9, 2024
    Assignee: SOOCHOW UNIVERSITY
    Inventors: Tongshun Liu, Kedong Zhang, Chengdong Wang, Xuhong Guo
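    Steps S3-S4 can be illustrated under the classic feed-per-tooth model. Treating the deviation of the tool-mark spacing from the nominal feed as the equivalent-radius difference is a simplifying assumption, and the names are hypothetical.

```python
import math

def chip_thickness(mark_spacing, nominal_feed, angle_deg):
    # S3 (simplified): the deviation of adjacent tool-mark spacing from
    # the nominal feed per tooth approximates the difference of
    # equivalent cutting radius between adjacent cutter teeth.
    delta_r = mark_spacing - nominal_feed
    # S4 (simplified): instantaneous undeformed chip thickness from the
    # classic feed model, corrected by the radius difference.
    theta = math.radians(angle_deg)
    return nominal_feed * math.sin(theta) + delta_r
```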
  • Patent number: 11869181
    Abstract: A determination method for non-destructively and easily determining the state of an aggregate of a plurality of cells formed by three-dimensional culture is provided. A determination method according to the disclosed technology includes generating a phase difference image of an aggregate of a plurality of cells from a hologram obtained by imaging the aggregate, deriving a first index value that indicates the randomness of the array of phase difference amounts across the pixels constituting the phase difference image, and determining the state of the cells constituting the aggregate on the basis of the first index value.
    Type: Grant
    Filed: February 23, 2021
    Date of Patent: January 9, 2024
    Assignee: FUJIFILM Corporation
    Inventors: Sohichiro Nakamura, Sho Onozawa, Ryusuke Osaki
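    One plausible randomness measure for such a first index value, not necessarily the patented one, is the Shannon entropy of the phase-difference distribution over the image's pixels:

```python
import numpy as np

def randomness_index(phase_image, bins=16):
    # Shannon entropy of the binned phase-difference values: 0 for a
    # perfectly uniform array, higher for a more random array.
    hist, _ = np.histogram(phase_image, bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())
```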
  • Patent number: 11869182
    Abstract: A method is proposed for identifying (“segmenting”) at least one portion of the skin of an animal which is a region of interest (e.g. a portion which is subject to an abnormality such as a tumor). The method uses at least a temperature dataset obtained by measuring the temperature of each of a plurality of points of a region of the skin. An initial segmentation may be performed using the temperature data based on a statistical model, in which each point is segmented based on its temperature and optionally that of its neighbors. The initial segmentation based on the temperature data may be improved using a three-dimensional model of the profile of the skin, and the enhanced segmentation may be used to improve the three-dimensional model.
    Type: Grant
    Filed: June 14, 2019
    Date of Patent: January 9, 2024
    Assignee: Fuel 3D Technologies Limited
    Inventors: Chris Kane, Leonardo Rubio Navarro, Adeala Zabair, Anna Chabokdast, James Klatzow
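    The initial statistical segmentation might, for illustration, flag points whose temperature is an outlier under a simple Gaussian model. The z-score model and threshold are assumptions; the patented method also refines this segmentation with the 3D skin profile.

```python
import numpy as np

def segment_by_temperature(temps, z_thresh=2.0):
    # Mark as region-of-interest any point whose temperature deviates
    # from the mean by more than z_thresh standard deviations.
    temps = np.asarray(temps, dtype=float)
    mu, sigma = temps.mean(), temps.std()
    return np.abs(temps - mu) > z_thresh * sigma
```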
  • Patent number: 11869183
    Abstract: An endoscope processor according to one aspect includes an image acquisition unit that acquires a captured image from an endoscope, a first correction unit that corrects the captured image acquired by the image acquisition unit, a second correction unit that also corrects the captured image acquired by the image acquisition unit, and an output unit that outputs an endoscopic image based on the captured image corrected by the first correction unit, together with a recognition result from a trained image recognition model that outputs the recognition result when the captured image corrected by the second correction unit is input.
    Type: Grant
    Filed: August 30, 2019
    Date of Patent: January 9, 2024
    Assignee: HOYA CORPORATION
    Inventor: Akihiko Nishide
  • Patent number: 11869184
    Abstract: The present invention relates to a method of assisting in the diagnosis of a target heart disease using a retinal image, the method including: obtaining a target retinal image by imaging the retina of a testee; obtaining, on the basis of the target retinal image, heart disease diagnosis assistance information of the testee via a heart disease diagnosis assistance neural network model that obtains, from a retinal image, diagnosis assistance information used for diagnosis of the target heart disease; and outputting the heart disease diagnosis assistance information of the testee.
    Type: Grant
    Filed: June 24, 2021
    Date of Patent: January 9, 2024
    Assignee: Medi Whale Inc.
    Inventors: Tae Geun Choi, Geun Yeong Lee, Hyung Taek Rim