Patents Examined by Saptarshi Mazumder
  • Patent number: 11972528
    Abstract: A method for processing model data of a set of garments includes storing first and second model data of a first and a second garment of the set, each including two-dimensional or three-dimensional geometry data defining a mesh associated with the respective garment. A limiting object of the respective garment is defined by at least a portion of the geometry data and constitutes a separation between an interior and an exterior of the respective garment. The first and the second garment constitute an inner garment and an outer garment that is worn over the inner garment. At least one opening object for the outer garment is stored, defined as a portion of the limiting object and constituting a transition for an item between the interior and the exterior of the outer garment. Intersection objects are determined for each of the garments, defining one or more intersections between the limiting objects of the garments.
    Type: Grant
    Filed: March 29, 2022
    Date of Patent: April 30, 2024
    Assignee: Reactive Reality GmbH
    Inventors: Stefan Hauswiesner, Philipp Grasmug, Alexander Pilz
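    A minimal data-model sketch of the garment records and intersection step described in the abstract above; all class and field names (GarmentModel, limiting_faces, openings) are illustrative assumptions, not terms from the patent.
```python
# Hypothetical data model; names and fields are illustrative, not from the patent.
from dataclasses import dataclass, field

@dataclass
class GarmentModel:
    name: str
    mesh_vertices: list                            # 2D/3D geometry defining the mesh
    limiting_faces: list                           # separates garment interior from exterior
    openings: list = field(default_factory=list)   # transitions through the limiting object

def intersection_objects(inner: GarmentModel, outer: GarmentModel) -> list:
    """Toy stand-in: report limiting-object faces shared by the inner and outer garment."""
    outer_faces = {tuple(f) for f in outer.limiting_faces}
    return [f for f in inner.limiting_faces if tuple(f) in outer_faces]
```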
  • Patent number: 11961192
    Abstract: Improved techniques for re-localizing Internet-of-Things (IOT) devices are disclosed herein. Sensor data digitally representing one or more condition(s) monitored by an IOT device is received. In response, a sensor readings map is accessed, where this map is associated with the IOT device. The map also digitally represents the IOT device's environment and includes data representative of a location of the IOT device within the environment. The map further includes data representative of the conditions monitored by the IOT device. Additionally, the map is updated by attaching the sensor data to the map. In some cases, a coverage map can also be computed. Both the sensor readings map and the coverage map can be automatically updated in response to the IOT device being re-localized.
    Type: Grant
    Filed: November 3, 2021
    Date of Patent: April 16, 2024
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Michael Bleyer, Yuri Pekelny, Raymond Kirk Price
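    A rough sketch of attaching readings to a per-device sensor readings map and updating it on re-localization, as outlined above; the structures and update rule are assumptions.
```python
# Structures and update rule below are assumptions for illustration.
from dataclasses import dataclass, field

@dataclass
class SensorReadingsMap:
    device_id: str
    device_location: tuple                           # (x, y, z) within the environment
    readings: dict = field(default_factory=dict)     # location -> attached sensor data

    def attach(self, reading: dict) -> None:
        """Attach incoming sensor data at the device's current location."""
        self.readings.setdefault(self.device_location, []).append(reading)

    def relocalize(self, new_location: tuple) -> None:
        """Update the stored device location when the device is re-localized."""
        self.device_location = new_location

m = SensorReadingsMap("thermo-01", (2.0, 1.5, 0.0))
m.attach({"temperature_c": 21.4})
m.relocalize((3.0, 1.5, 0.0))
m.attach({"temperature_c": 21.9})
```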
  • Patent number: 11961189
    Abstract: The subject technology generates depth data using a machine learning model based at least in part on captured image data from at least one camera of a client device. The subject technology applies, to the captured image data and the generated depth data, a 3D effect based at least in part on an augmented reality content generator. The subject technology generates a depth map using at least the depth data. The subject technology generates a packed depth map based at least in part on the depth map. The subject technology converts a single channel floating point texture to a raw depth map. The subject technology generates multiple channels based at least in part on the raw depth map. The subject technology generates a segmentation mask based at least on the captured image data. The subject technology performs background inpainting and blurring of the captured image data using at least the segmentation mask to generate background inpainted image data.
    Type: Grant
    Filed: May 5, 2023
    Date of Patent: April 16, 2024
    Assignee: Snap Inc.
    Inventors: Kyle Goodrich, Samuel Edward Hare, Maxim Maximov Lazarov, Tony Mathew, Andrew James McPhee, Daniel Moreno, Dhritiman Sagar, Wentao Shang
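    A hedged sketch of one plausible reading of the packed-depth-map step above: a single-channel floating-point depth texture split across multiple 8-bit channels. The packing scheme is an assumption, not the patented method.
```python
# Assumed packing scheme: normalized depth split across two 8-bit channels.
import numpy as np

def pack_depth(depth: np.ndarray, near: float = 0.1, far: float = 10.0) -> np.ndarray:
    """Normalize a float depth map to [0, 1] and split it into coarse/fine 8-bit channels."""
    d = np.clip((depth - near) / (far - near), 0.0, 1.0)
    coarse = np.floor(d * 255.0)                      # high-order channel
    fine = np.floor((d * 255.0 - coarse) * 255.0)     # low-order remainder
    return np.stack([coarse, fine], axis=-1).astype(np.uint8)

depth = np.random.uniform(0.1, 10.0, size=(4, 4)).astype(np.float32)
packed = pack_depth(depth)                            # shape (4, 4, 2), dtype uint8
```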
  • Patent number: 11900322
    Abstract: Disclosed herein is software technology that facilitates collaboration on a BIM file. In one aspect, disclosed herein is a method that involves (1) providing presence information to a first client station associated with a first individual that is viewing a first rendered 3D model of a construction project, wherein the presence information comprises (a) information identifying at least a second individual that is viewing a second rendered 3D model of the construction project via a second client station, and (b) an indication of a position and orientation at which the second rendered 3D model is being rendered, (2) receiving, from the second client station, an indication of a change in either the position or orientation at which the second rendered 3D model is being rendered, (3) updating the presence information based on the received indication, and (4) providing the updated presence information to at least the first client station.
    Type: Grant
    Filed: March 20, 2020
    Date of Patent: February 13, 2024
    Assignee: Procore Technologies, Inc.
    Inventors: Kevin McKee, Ben Burlingham
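    An illustrative sketch of the presence-information exchange described above; the record layout and update function are assumptions for clarity only.
```python
# Record layout and update function are assumptions.
from dataclasses import dataclass

@dataclass
class Presence:
    user_id: str
    position: tuple        # camera position within the rendered 3D model
    orientation: tuple     # camera orientation (e.g. yaw, pitch, roll)

presence = {"user-2": Presence("user-2", (10.0, 4.0, 1.7), (90.0, 0.0, 0.0))}

def on_camera_change(user_id: str, position: tuple, orientation: tuple) -> dict:
    """Update presence for the reporting client and return the payload sent to the others."""
    presence[user_id] = Presence(user_id, position, orientation)
    return {uid: vars(p) for uid, p in presence.items()}

payload = on_camera_change("user-2", (12.0, 4.0, 1.7), (75.0, 0.0, 0.0))
```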
  • Patent number: 11893693
    Abstract: Generating and storing digital media can be resource intensive processes. Some systems and methods disclosed herein relate to generating digital media using a pre-existing three-dimensional (3D) model of an object and feature points of the object. According to an embodiment, a method includes an e-commerce platform receiving a request for digital media depicting an object. In response to the request, the e-commerce platform may obtain a 3D model corresponding to the object and data pertaining to one or more feature points of the object. The one or more feature points may correspond to respective views of the 3D model. The e-commerce platform may then generate the digital media based on the 3D model and the one or more feature points, where the digital media could include renders of the 3D model depicting the one or more feature points.
    Type: Grant
    Filed: June 14, 2021
    Date of Patent: February 6, 2024
    Inventors: Jonathan Wade, Juho Mikko Haapoja, Stephan Leroux, Daniel Beauchamp
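    A hedged sketch of generating media from a stored 3D model plus feature points that each correspond to a view; render_view() and the model path are placeholders, not an API from the patent or any platform.
```python
# render_view() and the model path are placeholders, not a real platform API.
from dataclasses import dataclass

@dataclass
class FeaturePoint:
    label: str
    camera_position: tuple
    look_at: tuple

def render_view(model_path: str, camera_position: tuple, look_at: tuple) -> bytes:
    """Placeholder renderer; a real system would rasterize or ray-trace the 3D model."""
    return f"{model_path}@{camera_position}->{look_at}".encode()

def generate_media(model_path: str, feature_points: list) -> dict:
    return {fp.label: render_view(model_path, fp.camera_position, fp.look_at)
            for fp in feature_points}

media = generate_media("shoe.glb", [FeaturePoint("sole", (0, -1, 0), (0, 0, 0)),
                                    FeaturePoint("laces", (0, 1, 1), (0, 0, 0))])
```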
  • Patent number: 11893316
    Abstract: The present disclosure is directed to a software tool that facilitates coordination between various parties that are involved in the process of rectifying a problem identified in a combined three-dimensional model file. In one implementation, the software tool may cause a computing device to (a) receive an indication requesting creation of a coordination issue that relates to a portion of a rendered three-dimensional view of a construction project, (b) in response to the receipt of the indication, create a data set defining the coordination issue, the data set including (i) a representation of the portion of the rendered three-dimensional view, and (ii) data indicating an assignee of the coordination issue, and (c) cause an indication of the coordination issue to be presented to a client station associated with the assignee.
    Type: Grant
    Filed: December 19, 2022
    Date of Patent: February 6, 2024
    Assignee: Procore Technologies, Inc.
    Inventors: Dave McCool, Chris Bindloss
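    A minimal sketch of the coordination-issue data set described above; field names and the notify step are illustrative assumptions.
```python
# Field names and the notify step are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class CoordinationIssue:
    issue_id: int
    viewpoint: dict        # representation of the rendered 3D view (camera, clipping, etc.)
    assignee: str          # party responsible for rectifying the problem

def notify(assignee: str, issue: "CoordinationIssue") -> None:
    print(f"notify {assignee}: issue {issue.issue_id} at {issue.viewpoint}")

def create_issue(issue_id: int, viewpoint: dict, assignee: str) -> CoordinationIssue:
    issue = CoordinationIssue(issue_id, viewpoint, assignee)
    notify(assignee, issue)          # present the issue on the assignee's client station
    return issue

issue = create_issue(42, {"camera": (3.0, 1.5, 10.0), "target": (0, 0, 0)}, "mep-engineer")
```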
  • Patent number: 11887233
    Abstract: Methods and systems are provided for preparing training data for training a neural network to simulate deformations of a surface of a CG character, for training such a neural network, and for employing a trained neural network to simulate deformations of a surface of a CG character. Matrix decomposition techniques are used to generate the training data and are subsequently used by trained neural networks during inference to reconstruct CG character surfaces. The inference methods and systems are suitable for real time animation applications.
    Type: Grant
    Filed: February 18, 2022
    Date of Patent: January 30, 2024
    Assignee: Digital Domain Virtual Human (US), Inc.
    Inventor: David Sebastian Minor
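    A sketch of the general technique class named above: surface deformations compressed with a truncated SVD and later rebuilt from coefficients a network could predict. It illustrates matrix-decomposition reconstruction only, not the patent's specific decomposition or network.
```python
# Truncated-SVD stand-in for the matrix decomposition; random data for illustration only.
import numpy as np

poses = np.random.randn(200, 3 * 1000).astype(np.float32)   # each row: flattened vertex offsets
mean = poses.mean(axis=0)
U, S, Vt = np.linalg.svd(poses - mean, full_matrices=False)
basis = Vt[:32]                        # keep 32 deformation components

coeffs = (poses - mean) @ basis.T      # targets a network could learn to predict
reconstructed = mean + coeffs @ basis  # surface rebuilt from coefficients at inference time
```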
  • Patent number: 11887283
    Abstract: Devices, methods, and non-transitory program storage devices are disclosed herein to provide for improved perspective distortion correction for wide field of view (FOV) video image streams. The techniques disclosed herein may be configured such that the distortion correction applied to requested region of interest (ROI) portions taken from individual images of the wide FOV video image stream smoothly transitions between applying different distortion correction to ROIs, depending on their respective FOVs. In particular, the techniques disclosed herein may modify the types and/or amounts of perspective distortion correction applied, based on the FOVs of the ROIs, as well as their location within the original wide FOV video image stream. In some cases, additional perspective distortion correction may also be applied to account for tilt in an image capture device as the wide FOV video image stream is being captured and/or the unwanted inclusion of “invalid” pixels from the wide FOV image.
    Type: Grant
    Filed: March 28, 2022
    Date of Patent: January 30, 2024
    Assignee: Apple Inc.
    Inventors: Jianping Zhou, Ali-Amir Aldan, Sebastien X. Beysserie
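    A toy sketch of varying perspective correction with the field of view of the requested crop, in the spirit of the abstract above: narrow ROIs lean toward a rectilinear mapping, wide ROIs toward a curved one. The blend rule is an illustrative assumption.
```python
# The blend rule below is an assumption, not the patented method.
import math

def correction_blend(roi_fov_deg: float, low: float = 40.0, high: float = 100.0) -> float:
    """0 for narrow ROIs (pure rectilinear) ramping to 1 for very wide ROIs."""
    return min(1.0, max(0.0, (roi_fov_deg - low) / (high - low)))

def remap_radius(theta: float, blend: float) -> float:
    """Blend rectilinear (tan) and stereographic (2*tan(theta/2)) radial mappings."""
    return (1.0 - blend) * math.tan(theta) + blend * 2.0 * math.tan(theta / 2.0)

print(remap_radius(math.radians(30), correction_blend(45.0)))    # nearly rectilinear
print(remap_radius(math.radians(30), correction_blend(95.0)))    # mostly stereographic
```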
  • Patent number: 11869143
    Abstract: Provided are a cutting method, apparatus and system for a point cloud model. In an embodiment, the method includes: using a two-dimensional first cutting window to select a point cloud structure comprising a target object from a point cloud model; adjusting the depth of the first cutting window, so that the length, width and depth of the first cutting window constitute a three-dimensional second cutting window in which the target object is located; identifying and marking all point cloud structures in the second cutting window to form a plurality of three-dimensional third cutting windows, the target object being located in one of the third cutting windows; and calculating the volume ratio of the point cloud structure in each third cutting window relative to the second cutting window, and selecting the third cutting window having the largest volume ratio.
    Type: Grant
    Filed: June 28, 2019
    Date of Patent: January 9, 2024
    Assignee: Siemens Ltd., China
    Inventors: Hai Feng Wang, Tao Fei
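    A short sketch of the final selection step above: compare each candidate (third) window's volume against the second window and keep the largest ratio. Representing windows as axis-aligned boxes is an assumption.
```python
# Windows are modelled as axis-aligned boxes for illustration.
def box_volume(box: dict) -> float:
    return ((box["x_max"] - box["x_min"]) *
            (box["y_max"] - box["y_min"]) *
            (box["z_max"] - box["z_min"]))

def select_target_window(second_window: dict, third_windows: list) -> dict:
    """Return the third cutting window with the largest volume ratio to the second window."""
    ref = box_volume(second_window)
    return max(third_windows, key=lambda w: box_volume(w) / ref)

second = {"x_min": 0, "x_max": 4, "y_min": 0, "y_max": 4, "z_min": 0, "z_max": 4}
candidates = [
    {"x_min": 0, "x_max": 1, "y_min": 0, "y_max": 1, "z_min": 0, "z_max": 1},
    {"x_min": 0, "x_max": 3, "y_min": 0, "y_max": 3, "z_min": 0, "z_max": 3},
]
print(select_target_window(second, candidates))   # the larger candidate is selected
```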
  • Patent number: 11869139
    Abstract: A method for generating a three-dimensional (3D) model of an object includes: capturing images of the object from a plurality of viewpoints, the images including color images; generating a 3D model of the object from the images, the 3D model including a plurality of planar patches; for each patch of the planar patches: mapping image regions of the images to the patch, each image region including at least one color vector; and computing, for each patch, at least one minimal color vector among the color vectors of the image regions mapped to the patch; generating a diffuse component of a bidirectional reflectance distribution function (BRDF) for each patch of planar patches of the 3D model in accordance with the at least one minimal color vector computed for each patch; and outputting the 3D model with the BRDF for each patch.
    Type: Grant
    Filed: January 5, 2023
    Date of Patent: January 9, 2024
    Assignee: Packsize LLC
    Inventors: Giulio Marin, Abbas Rafii, Carlo Dal Mutto, Kinh Tieu, Giridhar Murali, Alvise Memo
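    A sketch of the per-patch diffuse estimate described above: across the image regions that see a patch, the minimal color vector is taken as the diffuse term, on the assumption that specular highlights only brighten observations. Illustration of the idea only, not the patented pipeline.
```python
# Per-patch minimum over observed colors; illustration of the idea only.
import numpy as np

def diffuse_per_patch(observations: dict) -> dict:
    """observations maps patch_id -> list of RGB colors seen from the mapped image regions."""
    return {patch: np.min(np.asarray(colors), axis=0)
            for patch, colors in observations.items()}

obs = {0: [(0.80, 0.20, 0.20), (0.95, 0.40, 0.40), (0.82, 0.22, 0.21)]}
print(diffuse_per_patch(obs))   # {0: array([0.8, 0.2, 0.2])} -- highlights are suppressed
```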
  • Patent number: 11869152
    Abstract: The present invention provides systems and methods for generating a 3D product mesh model and product dimensions from user images. The system is configured to receive one or more images of a user's body part, extract a body part mesh having a plurality of body part key points, generate a product mesh from an identified subset of the body part mesh, and generate one or more product dimensions in response to the selection of one or more key points from the product mesh. The system may output the product mesh, the product dimensions, or a manufacturing template of the product. In some embodiments, the system uses one or more machine learning modules to generate the body part mesh, identify the subset of the body part mesh, generate the product mesh, select the one or more key points, and/or generate the one or more product dimensions.
    Type: Grant
    Filed: May 11, 2021
    Date of Patent: January 9, 2024
    Assignee: Bodygram, Inc.
    Inventors: Chong Jin Koh, Kyohei Kamiyama, Nobuyuki Hayashi
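    A rough sketch of turning selected key points on a product mesh into a product dimension (here a simple Euclidean distance); the key-point names and measurement rule are illustrative assumptions.
```python
# Key-point names and the measurement rule are assumptions.
import math

def dimension_between(key_points: dict, a: str, b: str) -> float:
    """Euclidean distance between two selected key points on the product mesh."""
    return math.dist(key_points[a], key_points[b])

ring_key_points = {
    "inner_left":  (0.000, 0.0, 0.0),    # metres
    "inner_right": (0.017, 0.0, 0.0),
}
diameter_mm = dimension_between(ring_key_points, "inner_left", "inner_right") * 1000
print(f"inner diameter: {diameter_mm:.1f} mm")
```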
  • Patent number: 11830139
    Abstract: Aspects described herein provide systems and methods that relate generally to image analysis and, more specifically, to overlaying images onto a three-dimensional model of a vehicle. The systems and methods include a vehicle application that receives vehicle information via a user interface. The vehicle application receives a plurality of images that comprise images of one or more internal components of the vehicle and images of one or more external components of the vehicle. The vehicle application utilizes a machine learning model to classify actual current images and attach, overlay, and wrap the actual current images onto a three-dimensional model of the vehicle. The machine learning model and vehicle application mesh the actual current images around the three-dimensional model of the vehicle to create a three-dimensional view of the vehicle. The vehicle application further displays, via the user interface, the three-dimensional view of the vehicle.
    Type: Grant
    Filed: September 3, 2021
    Date of Patent: November 28, 2023
    Assignee: Capital One Services, LLC
    Inventors: Chih-Hsiang Chow, Steven Dang, Elizabeth Furlan
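    An illustrative sketch of routing classified photos onto named parts of a 3D vehicle model; classify() stands in for the machine learning model, and the part labels are assumptions.
```python
# classify() stands in for the machine learning model; part labels are assumptions.
def classify(image_path: str) -> str:
    """Placeholder classifier mapping a photo to a vehicle component label."""
    return "driver_door" if "door" in image_path else "dashboard"

def build_texture_map(image_paths: list) -> dict:
    """Group each photo under the model part it depicts (part label -> photos to wrap)."""
    textures = {}
    for path in image_paths:
        textures.setdefault(classify(path), []).append(path)
    return textures

print(build_texture_map(["img/door_left.jpg", "img/interior_front.jpg"]))
```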
  • Patent number: 11806094
    Abstract: Methods, apparatus and computer software products implement embodiments of the present invention that include applying energy to a probe that is in contact with tissue in a body cavity so as to ablate the tissue. While applying the energy, signals are received from a location transducer in the probe, which are indicative of a location of the probe in the cavity. The signals are processed so as to derive 3D location coordinate points corresponding to the location of the probe at a sequence of times during which the energy was applied. While applying the energy, a 3D representation of the body cavity is rendered to a display, and visual indicators are superimposed on the 3D representation, the visual indicators corresponding to the 3D location coordinate points at the sequence of times. Finally, a linear trace connecting the coordinate points in accordance with the sequence is superimposed on the 3D representation.
    Type: Grant
    Filed: March 5, 2020
    Date of Patent: November 7, 2023
    Assignee: Biosense Webster (Israel) Ltd.
    Inventors: Amiram Sheiner, Assaf Cohen, Ilya Shtirberg, Maxim Galkin
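    A sketch of the trace-building step described above: location samples taken while energy is applied are kept in time order and connected into a polyline of segments for display. The data layout is an assumption.
```python
# Sample layout (timestamp, point) is an assumption.
def ablation_trace(samples: list) -> list:
    """samples: (timestamp, (x, y, z)) pairs; returns consecutive segments in time order."""
    ordered = [point for _, point in sorted(samples, key=lambda s: s[0])]
    return list(zip(ordered, ordered[1:]))

samples = [(0.0, (1.0, 2.0, 3.0)), (1.0, (1.2, 2.1, 3.0)), (2.0, (1.4, 2.3, 3.1))]
for start, end in ablation_trace(samples):
    print("segment", start, "->", end)   # segments to superimpose on the 3D representation
```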
  • Patent number: 11810245
    Abstract: An illustrative server system receives a request from a client device to join a virtual world managed by the server system. In response to receiving the request, the server system identifies an avatar location for an avatar that is to represent a user of the client device within the virtual world when the client device joins the virtual world. Based on the avatar location, the server system determines a first priority value for a first asset of the virtual world and a second priority value for a second asset of the virtual world. Based on the first and second priority values, the server system provides to the client device a prioritized replication of the virtual world in which the first asset is replicated in a prioritized manner that allows the user to experience the first asset before the second asset is replicated. Corresponding methods and systems are also disclosed.
    Type: Grant
    Filed: August 25, 2021
    Date of Patent: November 7, 2023
    Assignee: Verizon Patent and Licensing Inc.
    Inventors: William Robert Davey, James Susinno, Vito Joseph Messina, Oliver S. Castaneda
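    A minimal sketch of one plausible priority rule for the replication described above: assets nearer the avatar location are replicated to the client first. The distance-based scoring is an assumption, not the claimed method.
```python
# Distance-based scoring is an assumption, not the claimed method.
import math

def replication_order(avatar_pos: tuple, assets: dict) -> list:
    """assets maps asset_id -> world position; nearer assets are replicated first."""
    return sorted(assets, key=lambda a: math.dist(avatar_pos, assets[a]))

assets = {"fountain": (2.0, 0.0, 1.0), "far_tower": (120.0, 0.0, -40.0)}
print(replication_order((0.0, 0.0, 0.0), assets))   # ['fountain', 'far_tower']
```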
  • Patent number: 11798214
    Abstract: Systems and methods are described for applying a unifying visual effect, such as posterization, to all or most of the visual elements in a film. In one implementation, a posterization standard includes a line work standard, a color palette, a plurality of color blocks characterized by one or more hard edges, and a gradient transition associated with each of the hard edges. The visual elements, including live actors and set pieces, are prepared in accordance with the posterization standard. The actors are filmed performing live among the set pieces. The live-action segments can be composited with digital elements. The result is a combination of both real and stylized elements, captured simultaneously, to produce an enhanced hybrid of live action and animation.
    Type: Grant
    Filed: July 14, 2021
    Date of Patent: October 24, 2023
    Assignee: Trioscope Studios, LLC
    Inventors: Grzegorz Jonkajtys, L. Chad Crowley
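    A toy sketch of the image-processing notion of posterization referenced above (snapping colors to a small palette with hard edges); it illustrates the effect only, not the production standard or compositing workflow the patent describes.
```python
# Palette snapping only; not the patented production standard.
def posterize(pixel: tuple, palette: list) -> tuple:
    """Snap an RGB pixel to the nearest palette color, producing hard-edged color blocks."""
    return min(palette, key=lambda c: sum((a - b) ** 2 for a, b in zip(pixel, c)))

palette = [(20, 20, 20), (200, 60, 40), (240, 230, 210)]
print(posterize((190, 70, 55), palette))    # -> (200, 60, 40)
```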
  • Patent number: 11797724
    Abstract: Systems and techniques are provided for determining environmental layouts. For example, based on one or more images of an environment and depth information associated with the one or more images, a set of candidate layouts and a set of candidate objects corresponding to the environment can be detected. The set of candidate layouts and set of candidate objects can be organized as a structured tree. For instance, a structured tree can be generated including nodes corresponding to the set of candidate layouts and the set of candidate objects. A combination of objects and layouts can be selected in the structured tree (e.g., based on a search of the structured tree, such as using a Monte-Carlo Tree Search (MCTS) algorithm or adapted MCTS algorithm). A three-dimensional (3D) layout of the environment can be determined based on the combination of objects and layouts in the structured tree.
    Type: Grant
    Filed: November 8, 2021
    Date of Patent: October 24, 2023
    Assignee: QUALCOMM Incorporated
    Inventors: Shreyas Hampali, Sinisa Stekovic, Friedrich Fraundorfer, Vincent Lepetit
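    The abstract describes searching a structured tree of candidate layouts and objects, for instance with MCTS. The sketch below swaps in a brute-force search over the same kind of tree for brevity; score() is a placeholder fit term, and all names are assumptions.
```python
# Brute-force search stands in for the tree search; score() is a placeholder fit term.
from itertools import product

candidate_layouts = ["layout_A", "layout_B"]
candidate_objects = {"chair": ["chair_1", "chair_2"], "table": ["table_1"]}

def score(layout: str, objects: tuple) -> float:
    """Placeholder: a real system scores agreement with the images and depth information."""
    return (1.0 if layout == "layout_A" else 0.5) + 0.1 * len(objects)

def best_combination():
    combos = product(candidate_layouts, *candidate_objects.values())
    return max(combos, key=lambda c: score(c[0], c[1:]))

print(best_combination())   # e.g. ('layout_A', 'chair_1', 'table_1')
```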
  • Patent number: 11783545
    Abstract: Disclosed are editing tools for manipulating a three-dimensional (“3D”) data file or point cloud. An editing application may generate a visualization of the 3D data file or point cloud, and a user may invoke an editing tool over a particular region of the visualization that is rendered based on the positional and non-positional values of a first data point set and a second data point set from the 3D data file or point cloud. The editing tool may differentiate the first data point set from the second data point set based on unique commonality in the positional and/or non-positional values of the first data point set, and may edit less than all of the particular region by adjusting one or more of the positional and/or non-positional values of the first data point set while retaining the positional and non-positional values of the second data point set.
    Type: Grant
    Filed: May 11, 2023
    Date of Patent: October 10, 2023
    Assignee: Illuscio, Inc.
    Inventor: Max Good
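    A sketch of the selective edit described above: within the region under the tool, points are split by a shared non-positional value (here color) and only the matching subset is adjusted. Attribute names are assumptions.
```python
# Point attributes and the matching rule are assumptions.
def edit_region(points: list, target_rgb: tuple, brightness: float) -> list:
    """Brighten only the points sharing target_rgb; leave the rest of the region untouched."""
    edited = []
    for p in points:
        if p["rgb"] == target_rgb:                      # the first data point set
            p = {**p, "rgb": tuple(min(255, int(c * brightness)) for c in p["rgb"])}
        edited.append(p)                                # second set retained unchanged
    return edited

region = [{"pos": (0, 0, 0), "rgb": (120, 60, 30)},
          {"pos": (0, 1, 0), "rgb": (10, 10, 10)}]
print(edit_region(region, (120, 60, 30), 1.2))
```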
  • Patent number: 11776175
    Abstract: A computing system for adaptive point generation includes a storage to store a densely sampled polyline or surface, or mathematical function, and a processor to compute the area of a contour of the polyline or function with respect to itself, or compute the volume of the surface or function with respect to itself, adaptively resample the polyline, surface, or function, wherein the adaptive resampling is based on and inversely proportional to the computed area or volume, and connect adaptively resampled points as an adaptively sampled polyline or surface.
    Type: Grant
    Filed: April 26, 2018
    Date of Patent: October 3, 2023
    Assignee: Hewlett-Packard Development Company, L.P.
    Inventor: Nathan Moroney
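    A hedged sketch of "inversely proportional to the computed area": each polyline segment receives more resampled points when its local area term is small. The local-area definition (triangle with the centroid) is only one plausible reading of the abstract, used here for illustration.
```python
# The local-area term (triangle with the centroid) is only one plausible reading.
def triangle_area(a, b, c):
    return 0.5 * abs((b[0] - a[0]) * (c[1] - a[1]) - (c[0] - a[0]) * (b[1] - a[1]))

def adaptive_resample_counts(poly, budget=20):
    """Allocate resampled points per segment, inversely proportional to a local area term."""
    cx = sum(p[0] for p in poly) / len(poly)
    cy = sum(p[1] for p in poly) / len(poly)
    areas = [triangle_area(poly[i], poly[i + 1], (cx, cy)) + 1e-9
             for i in range(len(poly) - 1)]
    weights = [1.0 / a for a in areas]
    total = sum(weights)
    return [max(1, round(budget * w / total)) for w in weights]

poly = [(0, 0), (1, 0.05), (2, 0.1), (2.5, 2.0)]
print(adaptive_resample_counts(poly))   # segments with a small area term get more points
```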
  • Patent number: 11778007
    Abstract: A content providing server provides virtual reality content. The content providing server includes a requesting unit configured to receive a request for virtual reality content from a user device; a dynamic object image processing unit configured to render an image of a dynamic object contained in the virtual reality content; a static object image processing unit configured to render an image of a static object contained in the virtual reality content; and a streaming unit configured to separately stream the image of the dynamic object and the image of the static object to the user device.
    Type: Grant
    Filed: August 13, 2019
    Date of Patent: October 3, 2023
    Assignee: KT CORPORATION
    Inventors: Kang Tae Kim, Ki Hyuk Kim, I Gil Kim
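    An architectural sketch of the split described above: dynamic objects are rendered and streamed every frame, static objects only when the viewpoint changes, as two separate streams the client composites. Function names are placeholders, not the server's API.
```python
# Function names are placeholders, not the server's API.
def render(objects, viewpoint):
    return f"frame({sorted(objects)}@{viewpoint})"              # placeholder rendered image

def stream_frame(viewpoint, prev_viewpoint, dynamic_objects, static_objects, send):
    send("dynamic", render(dynamic_objects, viewpoint))         # dynamic stream: every frame
    if viewpoint != prev_viewpoint:                             # static stream: on camera change
        send("static", render(static_objects, viewpoint))

sent = []
stream_frame((0, 0), None, {"avatar"}, {"terrain"}, lambda ch, img: sent.append((ch, img)))
print(sent)   # both channels are sent on the first frame
```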
  • Patent number: 11776504
    Abstract: Disclosed are embodiments of in-situ display monitoring and calibration systems and methods. An image acquisition system captures images of the viewing plane of the display. Captured images may then be processed to characterize various visual performance characteristics of the display. When not in use for capturing images of the display, the image acquisition system can be stored in a manner that protects it from environmental hazards such as dust, dirt, precipitation, direct sunlight, etc. A calibration image in which a plurality of light emitting elements is set to a particular color and intensity may be displayed, an image then captured, and then a difference between what was expected and what was captured may be developed for each light emitting element. Differences between captured images and expected images may be used to create a calibration data set which then may be used to adjust the display of further images upon the display.
    Type: Grant
    Filed: December 17, 2021
    Date of Patent: October 3, 2023
    Assignee: Nanolumens Acquisition, Inc.
    Inventors: Richard C. Cope, Theodore Heske, III
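    A sketch of the per-element calibration step described above: compare what each light emitting element was asked to show with what the camera captured and derive a per-element gain applied to later frames. The gain model and values are assumptions for illustration.
```python
# Per-element gain correction is an assumed model; values are synthetic.
import numpy as np

expected = np.full((2, 3, 3), 200.0)                              # commanded RGB, 2x3 panel
captured = expected * np.random.uniform(0.85, 1.05, expected.shape)

gains = np.clip(expected / np.maximum(captured, 1e-6), 0.5, 2.0)  # calibration data set

def calibrate(frame: np.ndarray) -> np.ndarray:
    """Adjust a further image using the per-element gains before display."""
    return np.clip(frame * gains, 0.0, 255.0)

print(calibrate(np.full((2, 3, 3), 128.0)).round(1))
```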