3-D or Stereo Imaging Analysis Patents (Class 382/154)
  • Patent number: 11810311
    Abstract: A system and method are disclosed having an end-to-end, two-stage depth estimation deep learning framework that takes one spherical color image and estimates dense spherical depth maps. The contemplated framework may include a view synthesis stage (stage 1) and a multi-view stereo matching stage (stage 2). The combination of the two-stage process may provide the advantage of geometric constraints from stereo matching to improve depth map quality, without the need for additional input data. It is also contemplated that a spherical warping layer may be used to integrate multiple spherical feature volumes into one cost volume with uniformly sampled inverse depth for the multi-view spherical stereo matching stage. The two-stage spherical depth estimation system and method may be used in various applications, including virtual reality, autonomous driving, and robotics.
    Type: Grant
    Filed: October 31, 2020
    Date of Patent: November 7, 2023
    Assignee: Robert Bosch GmbH
    Inventors: Zhixin Yan, Liu Ren, Yuyan Li, Ye Duan
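The spherical warping in the entry above can be illustrated with a small sketch: sample inverse depth uniformly, turn each equirectangular pixel into a unit ray, place it at each hypothesized depth, and reproject it into a second spherical view. The equirectangular convention, the pose convention, and all function names here are illustrative assumptions, not Bosch's implementation.

```python
import numpy as np

def equirect_to_rays(h, w):
    """Unit-length viewing rays for every pixel of an equirectangular image."""
    lon = (np.arange(w) + 0.5) / w * 2 * np.pi - np.pi          # [-pi, pi)
    lat = np.pi / 2 - (np.arange(h) + 0.5) / h * np.pi          # [pi/2, -pi/2)
    lon, lat = np.meshgrid(lon, lat)
    x = np.cos(lat) * np.sin(lon)
    y = np.sin(lat)
    z = np.cos(lat) * np.cos(lon)
    return np.stack([x, y, z], axis=-1)                          # (h, w, 3)

def rays_to_equirect(pts, h, w):
    """Project 3-D points back to pixel coordinates of an equirectangular image."""
    norm = np.linalg.norm(pts, axis=-1, keepdims=True)
    d = pts / np.maximum(norm, 1e-9)
    lon = np.arctan2(d[..., 0], d[..., 2])
    lat = np.arcsin(np.clip(d[..., 1], -1.0, 1.0))
    u = (lon + np.pi) / (2 * np.pi) * w - 0.5
    v = (np.pi / 2 - lat) / np.pi * h - 0.5
    return u, v

def spherical_warp_coords(h, w, R, t, d_min=0.5, d_max=20.0, n_hyp=32):
    """For each uniformly sampled inverse-depth hypothesis, compute where every
    pixel of the reference spherical view lands in the source spherical view."""
    inv_d = np.linspace(1.0 / d_max, 1.0 / d_min, n_hyp)          # uniform in 1/d
    rays = equirect_to_rays(h, w)
    coords = []
    for id_k in inv_d:
        pts = rays / id_k                                         # 3-D points at depth 1/id_k
        pts_src = pts @ R.T + t                                   # into the source camera frame
        coords.append(rays_to_equirect(pts_src, h, w))
    return inv_d, coords                                          # sample source features at these coords

# toy usage: identity rotation, small baseline along x
inv_d, coords = spherical_warp_coords(64, 128, np.eye(3), np.array([0.1, 0.0, 0.0]))
print(len(coords), coords[0][0].shape)
```

Sampling the hypotheses uniformly in inverse depth concentrates resolution at near range, which is what keeps a single cost volume workable across the whole sphere.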
  • Patent number: 11810304
    Abstract: Depth information from a depth sensor, such as a LiDAR system, is used to correct perspective distortion for decoding an optical pattern in a first image acquired by a camera. Image data from the first image is spatially correlated with the depth information. The depth information is used to identify a surface in the scene and to distort the first image to generate a second image, such that the surface in the second image is parallel to an image plane of the second image. The second image is then analyzed to decode an optical pattern on the surface identified in the scene.
    Type: Grant
    Filed: July 12, 2022
    Date of Patent: November 7, 2023
    Assignee: Scandit AG
    Inventors: Matthias Bloch, Christian Floerkemeier, Bernd Schoner
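One way to realize the depth-guided perspective correction described above is to fit a plane to the 3-D points from the depth sensor, rotate a virtual camera (same center) so its optical axis is parallel to the plane normal, and warp the image with the pure-rotation homography H = K R K^-1 so the plane appears fronto-parallel. The intrinsics, the least-squares plane fit, and the OpenCV warp below are assumptions for illustration, not Scandit's method.

```python
import numpy as np
import cv2

def fit_plane(points):
    """Least-squares plane n.x = d through an (N, 3) array of points."""
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid)
    n = vt[-1]                         # normal = direction of least variance
    return n / np.linalg.norm(n), float(n @ centroid)

def fronto_parallel_homography(K, n):
    """Homography that renders the plane with normal n fronto-parallel by
    rotating a virtual camera (same center) so its optical axis aligns with n."""
    z = n if n[2] > 0 else -n          # make the normal point away from the camera
    x = np.cross([0.0, 1.0, 0.0], z)
    x /= np.linalg.norm(x)
    y = np.cross(z, x)
    R = np.stack([x, y, z])            # rows = virtual camera axes in camera frame
    return K @ R @ np.linalg.inv(K)

# toy usage with synthetic data (assumed intrinsics and a tilted plane)
K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])
pts = np.random.rand(500, 3)
pts[:, 2] = 2.0 + 0.3 * pts[:, 0]      # plane tilted about the y-axis
n, d = fit_plane(pts)
H = fronto_parallel_homography(K, n)
image = np.zeros((480, 640, 3), dtype=np.uint8)
rectified = cv2.warpPerspective(image, H, (640, 480))   # decode the code in this view
print(rectified.shape)
```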
  • Patent number: 11810308
    Abstract: Due to factors such as lens distortion and camera misalignment, stereoscopic image pairs often contain vertical disparities. Introduced herein are a method and apparatus that determine and correct vertical disparities in stereoscopic image pairs using an optical flow map. Instead of discarding the vertical motion vectors of the optical flow map, the introduced concept extracts and analyzes the vertical motion vectors from the optical flow map and vertically aligns the images using the vertical disparity determined from those vectors. The introduced concept recognizes that, although not apparent, vertical motion does exist in stereoscopic images and can be used to correct the vertical disparity in stereoscopic images.
    Type: Grant
    Filed: February 24, 2021
    Date of Patent: November 7, 2023
    Assignee: NVIDIA Corporation
    Inventor: David Cook
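A minimal sketch of the approach in the entry above: estimate dense optical flow between the left and right images, keep only the vertical component, take a robust statistic of it as the vertical disparity, and shift one image to compensate. The Farneback flow and the single global shift are simplifying assumptions for illustration.

```python
import numpy as np
import cv2

def estimate_vertical_disparity(left_gray, right_gray):
    """Estimate a global vertical disparity (in pixels) between a stereo pair
    from the vertical component of a dense optical flow field."""
    flow = cv2.calcOpticalFlowFarneback(left_gray, right_gray, None,
                                        0.5, 3, 21, 3, 5, 1.1, 0)
    dy = flow[..., 1]                       # vertical motion vectors only
    return float(np.median(dy))             # robust to outliers in the flow field

def align_vertically(right, dy):
    """Shift the right image by -dy rows so the pair is vertically aligned."""
    h, w = right.shape[:2]
    M = np.float32([[1, 0, 0], [0, 1, -dy]])
    return cv2.warpAffine(right, M, (w, h))

# toy usage: the right image is the left image shifted down by 3 pixels
noise = (np.random.rand(240, 320) * 255).astype(np.uint8)
left = cv2.GaussianBlur(noise, (15, 15), 5)
right = np.roll(left, 3, axis=0)
dy = estimate_vertical_disparity(left, right)
print("estimated vertical disparity:", dy)   # typically close to 3 for this toy pair
aligned = align_vertically(right, dy)
```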
  • Patent number: 11801630
    Abstract: A tubular structure fabricated by additive manufacturing from non-biological building material formulations, and featuring an elongated core, a shell encapsulating the core and an intermediate shell between the core and the shell. Each of the core, the shell and the intermediate shell is made of a different material or a different combination of materials. Both the core and the intermediate shell are sacrificial. Additive manufacturing of the tubular structure is usable for fabricating an object featuring properties of a blood vessel.
    Type: Grant
    Filed: July 27, 2018
    Date of Patent: October 31, 2023
    Assignee: Stratasys Ltd.
    Inventors: Daniel Dikovsky, Amit Feffer
  • Patent number: 11801699
    Abstract: Example implementations relate to emulating 3D texture patterns in a printer system. One example implementation receives an image having a number of image pixels and selects from a plurality of digital substrates which each correspond to a physical substrate. Each digital substrate has luminance change data corresponding to heights of a 3D texture pattern at respective locations of the physical substrate. An image having emulated 3D texture is generated by adjusting the luminance of the image pixels corresponding to respective locations of the physical substrate and according to the corresponding luminance change data.
    Type: Grant
    Filed: March 19, 2019
    Date of Patent: October 31, 2023
    Assignee: Hewlett-Packard Development Company, L.P.
    Inventor: Gideon Amir
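The luminance-adjustment idea above can be sketched as shading a flat image by the local slope of a substrate height map: pixels on slopes facing away from an assumed light direction are darkened and those facing toward it are brightened. The shading model and its parameters below are illustrative assumptions, not HP's calibrated luminance-change data.

```python
import numpy as np

def emulate_texture(image_rgb, height_map, strength=0.35, light_dir=(1.0, 1.0)):
    """Darken/brighten pixels according to the local slope of a height map so a
    flat print appears to carry a 3-D texture (e.g., a canvas or leather grain)."""
    # luminance change ~ gradient of the substrate height along a light direction
    gy, gx = np.gradient(height_map.astype(np.float32))
    lx, ly = light_dir
    shading = (gx * lx + gy * ly) / np.hypot(lx, ly)
    shading = shading / (np.abs(shading).max() + 1e-9)            # normalize to [-1, 1]
    out = image_rgb.astype(np.float32) * (1.0 + strength * shading[..., None])
    return np.clip(out, 0, 255).astype(np.uint8)

# toy usage: a sinusoidal "canvas weave" height map applied to a flat gray image
h, w = 256, 256
yy, xx = np.mgrid[0:h, 0:w]
weave = np.sin(xx / 3.0) * np.sin(yy / 3.0)
flat = np.full((h, w, 3), 180, dtype=np.uint8)
textured = emulate_texture(flat, weave)
print(textured.shape, textured.min(), textured.max())
```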
  • Patent number: 11798299
    Abstract: Disclosed are systems and methods for generating data sets for training deep learning networks for key point annotations and measurements extraction from photos taken using a mobile device camera. The method includes the steps of receiving a 3D scan model of a 3D object or subject captured from a 3D scanner and a 2D photograph of the same 3D object or subject at a virtual workspace. The 3D scan model is rigged with one or more key points. A superimposed image of a pose-adjusted and aligned 3D scan model superimposed over the 2D photograph is captured by a virtual camera in the virtual workspace. Training data for a key point annotation DLN is generated by repeating the steps for a plurality of objects belonging to a plurality of object categories. The key point annotation DLN learns from the training data to produce key point annotations of objects from 2D photographs captured using any mobile device camera.
    Type: Grant
    Filed: November 2, 2020
    Date of Patent: October 24, 2023
    Assignee: Bodygram, Inc.
    Inventors: Kyohei Kamiyama, Chong Jin Koh
  • Patent number: 11793402
    Abstract: A system for generating a 3D model of a surgical site includes a 3D endoscope and a computing device coupled to the 3D endoscope. The 3D endoscope includes a scanner for scanning a surface of a surgical site and a camera source for generating images of the surgical site. A 3D model of the surgical site, including objects therein, is generated using scan data and image data. The 3D model is updated by detecting a change in the surgical site, isolating a region of the surgical site where the change is detected, generating second scan data by scanning the surface of the isolated region, and updating the 3D model using the second scan data of the surface of the isolated region.
    Type: Grant
    Filed: May 24, 2021
    Date of Patent: October 24, 2023
    Assignee: COVIDIEN LP
    Inventor: John W. Komp
  • Patent number: 11793280
    Abstract: A method for determining at least one morphological data item of a wristwatch-strap wearer in order to prepare (E3) a wristwatch strap taking this item into account. The method includes acquiring (E1) at least one morphological data item (1; 2) of the wearer from an acquisition device of a determination system (11) provided with at least one sensor, such as a camera, a laser or a lens, configured to scan, photograph, or film at least a portion of the wrist and/or hand of the wearer. The determination system (11) is designed to receive, and optionally emit, waves to acquire at least one morphological characteristic of the wearer of the strap in a first operation (O11), the at least one morphological data item being a dimension (1) from among a circumference and/or height and/or width of a portion of the wrist (p1) and/or of the hand (p2) of the wearer.
    Type: Grant
    Filed: November 25, 2019
    Date of Patent: October 24, 2023
    Assignee: ROLEX SA
    Inventors: Clément Grozel, Franck Haegy, Julien Jaffré
  • Patent number: 11797132
    Abstract: The present disclosure is related to a method and device for detecting a touch between at least part of a first object and at least part of a second object, wherein the at least part of the first object has a different temperature than the at least part of the second object. The method includes providing at least one thermal image of a portion of the second object, determining in at least part of the at least one thermal image a pattern which is indicative of a particular value or range of temperature or a particular value or range of temperature change, and using the determined pattern for detecting a touch between the at least part of the first object and the at least part of the second object.
    Type: Grant
    Filed: December 23, 2020
    Date of Patent: October 24, 2023
    Assignee: Apple Inc.
    Inventor: Daniel Kurz
  • Patent number: 11798196
    Abstract: A system comprises an encoder configured to compress attribute information and/or spatial information for three-dimensional (3D) visual volumetric content and/or a decoder configured to decompress compressed attribute and/or spatial information for the 3D visual volumetric content. The encoder is configured to convert 3D visual volumetric content, such as a point cloud or mesh, into image based patch representations. The encoder is further configured to select one or more reference patches for copying or prediction, such that metadata for copying or predicting a patch based on the reference patch is signaled without explicitly signaling a full set of information for the copied or predicted patch. Likewise, a decoder is configured to receive such information and reconstruct a 3D version of the 3D visual volumetric content using both signaled and predicted or copied patches.
    Type: Grant
    Filed: January 8, 2021
    Date of Patent: October 24, 2023
    Assignee: Apple Inc.
    Inventors: Jungsun Kim, Khaled Mammou, Alexandros Tourapis
  • Patent number: 11797863
    Abstract: A method of generating synthetic images of virtual scenes includes: placing, by a synthetic data generator implemented by a processor and memory, three-dimensional (3-D) models of objects in a 3-D virtual scene; adding, by the synthetic data generator, lighting to the 3-D virtual scene, the lighting including one or more illumination sources; applying, by the synthetic data generator, imaging modality-specific materials to the 3-D models of objects in the 3-D virtual scene in accordance with a selected multimodal imaging modality, each of the imaging modality-specific materials including an empirical model; setting a scene background in accordance with the selected multimodal imaging modality; and rendering, by the synthetic data generator, a two-dimensional image of the 3-D virtual scene based on the selected multimodal imaging modality to generate a synthetic image in accordance with the selected multimodal imaging modality.
    Type: Grant
    Filed: January 4, 2021
    Date of Patent: October 24, 2023
    Assignee: Intrinsic Innovation LLC
    Inventors: Kartik Venkataraman, Agastya Kalra, Achuta Kadambi
  • Patent number: 11798188
    Abstract: The present disclosure provides a method of remotely measuring a size of a pupil, including: acquiring an image of a to-be-measured person by using a detection device; acquiring an image of a pupil of the to-be-measured person from the image of the to-be-measured person; measuring a distance between the to-be-measured person and the detection device by using the detection device; and calculating an actual size of the pupil of the to-be-measured person based on the measured distance and the image of the pupil of the to-be-measured person. The present disclosure further provides an apparatus for remotely measuring a size of a pupil, an electronic device, and a non-transitory computer-readable medium.
    Type: Grant
    Filed: February 24, 2021
    Date of Patent: October 24, 2023
    Assignees: Tsinghua University, Nuctech Company Limited
    Inventors: Zhiqiang Chen, Yuanjing Li, Jianmin Li, Xianghao Wu, Bin Liu, Guocheng An
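The size calculation in the entry above reduces to the pinhole relation: an object of physical size S at distance Z projects to S*f/Z pixels, so S = pixels*Z/f. A minimal sketch, assuming the measured distance and a focal length expressed in pixel units:

```python
def pupil_size_mm(pupil_diameter_px, distance_mm, focal_length_px):
    """Pinhole-camera estimate of the physical pupil diameter.

    A feature of physical size S at distance Z projects to S * f / Z pixels,
    so S = pixels * Z / f.  Assumed inputs: the distance measured by the
    detection device (mm) and the camera focal length in pixels.
    """
    return pupil_diameter_px * distance_mm / focal_length_px

# toy usage: a 28 px pupil, subject 600 mm away, 4200 px focal length -> 4.0 mm
print(pupil_size_mm(28, 600, 4200))
```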
  • Patent number: 11790547
    Abstract: An information processing apparatus includes management circuitry that manages three-dimensional data obtained by a scanner in the order of obtaining, display controller circuitry that controls a display to show a three-dimensional image generated from combined data generated by combining a plurality of pieces of three-dimensional data, and input circuitry that accepts an operation by a user. When the input circuitry accepts an operation to select three-dimensional data, the management circuitry sets, as data to be deleted, the data from the selected three-dimensional data to the last-obtained three-dimensional data. The display controller circuitry controls the display to show a three-dimensional image modified to exclude the data to be deleted.
    Type: Grant
    Filed: January 14, 2021
    Date of Patent: October 17, 2023
    Assignee: J. MORITA MFG. CORP.
    Inventors: Keisuke Sorimoto, Masayuki Sano
  • Patent number: 11783482
    Abstract: The disclosure is related to a panoramic radiography device. The panoramic radiography device may include an image processor and a viewer module. The image processor may be configured to produce a primary panoramic image using a primary image layer and a secondary panoramic image using a secondary image layer based on a plurality of image frame data, wherein the secondary image layer is different from the primary image layer in at least one of a number, a position, a shape, an angle, and a thickness. The viewer module may be configured to i) provide a graphic user interface having a primary display area and a secondary display area arranged at a predetermined position of the primary display area, ii) display the primary panoramic image at the primary display area, and iii) display a part of the secondary panoramic image at the secondary display area, wherein the part of the secondary panoramic image corresponds to the predetermined position.
    Type: Grant
    Filed: May 31, 2022
    Date of Patent: October 10, 2023
    Assignees: VATECH Co., Ltd., VATECH EWOO Holdings Co., Ltd.
    Inventor: Sung Il Choi
  • Patent number: 11783506
    Abstract: A method of determining an angle of a trailer relative to a vehicle. The method includes generating a projection on a trailer with a projector. An image of the projection on the trailer is obtained with a camera. An angle of the trailer relative to the vehicle is determined by comparing the image of the projection with a known pattern of the projection.
    Type: Grant
    Filed: December 22, 2020
    Date of Patent: October 10, 2023
    Assignee: Continental Autonomous Mobility US, LLC
    Inventors: Julien Ip, Kyle Patrick Carpenter, Xin Yu
  • Patent number: 11783534
    Abstract: Embodiments of the present invention provide systems, methods, and computer storage media which retarget 2D screencast video tutorials into an active VR host application. VR-embedded widgets can render on top of a VR host application environment while the VR host application is active. Thus, VR-embedded widgets can provide various interactive tutorial interfaces directly inside the environment of the VR host application. For example, VR-embedded widgets can present external video content, related information, and corresponding interfaces directly in a VR painting environment, so a user can simultaneously access external video (e.g., screencast video tutorials) and a VR painting. Possible VR-embedded widgets include a VR-embedded video player overlay widget, a perspective thumbnail overlay widget (e.g., a user-view thumbnail overlay, an instructor-view thumbnail overlay, etc.), an awareness overlay widget, a tutorial steps overlay widget, and/or a controller overlay widget, among others.
    Type: Grant
    Filed: May 17, 2021
    Date of Patent: October 10, 2023
    Assignee: Adobe Inc.
    Inventors: Cuong Nguyen, Stephen Joseph DiVerdi, Balasaravanan Thoravi Kumaravel
  • Patent number: 11774983
    Abstract: The described positional awareness techniques, which employ sensory data gathering and analysis hardware and are described with reference to specific example implementations, implement improvements in the use of sensors, techniques, and hardware design that can enable specific embodiments to find new areas to cover while a robot performs an area coverage task over an unexplored area. The sensory data are gathered from an operational camera and one or more auxiliary sensors.
    Type: Grant
    Filed: December 20, 2019
    Date of Patent: October 3, 2023
    Assignee: Trifo, Inc.
    Inventors: Zhe Zhang, Qingyu Chen, Yen-Cheng Liu, Weikai Li
  • Patent number: 11770551
    Abstract: A method includes receiving a video comprising images representing an object, and determining, using a machine learning model, based on a first image of the images, and for each respective vertex of vertices of a bounding volume for the object, first two-dimensional (2D) coordinates of the respective vertex. The method also includes tracking, from the first image to a second image of the images, a position of each respective vertex along a plane underlying the bounding volume, and determining, for each respective vertex, second 2D coordinates of the respective vertex based on the position of the respective vertex along the plane. The method further includes determining, for each respective vertex, (i) first three-dimensional (3D) coordinates of the respective vertex based on the first 2D coordinates and (ii) second 3D coordinates of the respective vertex based on the second 2D coordinates.
    Type: Grant
    Filed: December 15, 2020
    Date of Patent: September 26, 2023
    Assignee: Google LLC
    Inventors: Adel Ahmadyan, Tingbo Hou, Jianing Wei, Liangkai Zhang, Artsiom Ablavatski, Matthias Grundmann
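The 2D-to-3D step described above can be illustrated by intersecting the camera ray through each tracked vertex pixel with the plane underlying the bounding volume. The intrinsics and the plane parameters below are assumed for illustration; the patent's actual lifting procedure may differ.

```python
import numpy as np

def backproject_to_plane(uv, K, plane_n, plane_d):
    """Intersect the camera ray through pixel (u, v) with the plane n.X = d
    (camera-frame coordinates) to recover a 3-D point on that plane."""
    u, v = uv
    ray = np.linalg.inv(K) @ np.array([u, v, 1.0])   # direction of the viewing ray
    t = plane_d / (plane_n @ ray)                    # solve n.(t * ray) = d
    return t * ray

# toy usage with assumed intrinsics and a horizontal supporting plane 1.5 m
# below the camera (camera y-axis pointing down): n = (0, 1, 0), d = 1.5
K = np.array([[600.0, 0, 320], [0, 600.0, 240], [0, 0, 1]])
plane_n, plane_d = np.array([0.0, 1.0, 0.0]), 1.5

vertices_2d = [(350.0, 300.0), (290.0, 310.0)]       # tracked vertex pixels (one frame)
vertices_3d = [backproject_to_plane(p, K, plane_n, plane_d) for p in vertices_2d]
for p2, p3 in zip(vertices_2d, vertices_3d):
    print(p2, "->", np.round(p3, 3))
```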
  • Patent number: 11769275
    Abstract: This method for inter-predictive encoding of a time-varying 3D point cloud, including a series of successive frames divided into 3D blocks, into at least one bitstream comprises encoding (20) 3D motion information including a geometric transformation comprising rotation information representative of a rotation transformation and translation information representative of a translation transformation, wherein the translation information comprises a vector ΔT representing an estimation error of the translation transformation.
    Type: Grant
    Filed: October 11, 2018
    Date of Patent: September 26, 2023
    Assignee: INTERDIGITAL VC HOLDINGS, INC.
    Inventors: Sebastien Lasserre, Saurabh Puri, Kangying Cai, Julien Ricard, Celine Guede
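Read on the decoder side, the motion information above can be applied to a 3D block as a rotation followed by a translation whose value is a prediction corrected by the signalled ΔT. Treating ΔT as an additive residual on a predicted translation is an assumption for illustration, not necessarily the codec's exact reconstruction rule.

```python
import numpy as np

def apply_block_motion(points, R, t_pred, delta_t):
    """Motion-compensate a 3-D block of a point cloud frame: rotate, then
    translate by the predicted translation corrected with the decoded delta_t."""
    t = t_pred + delta_t                     # translation = prediction + residual
    return points @ R.T + t

# toy usage: a reference block, a 10-degree yaw, and a small translation residual
theta = np.deg2rad(10.0)
R = np.array([[np.cos(theta), -np.sin(theta), 0],
              [np.sin(theta),  np.cos(theta), 0],
              [0, 0, 1]])
block = np.random.rand(100, 3)
t_pred = np.array([0.50, 0.00, 0.00])        # e.g., predicted from earlier frames
delta_t = np.array([0.02, -0.01, 0.00])      # signalled estimation error of the prediction
compensated = apply_block_motion(block, R, t_pred, delta_t)
print(compensated.shape)
```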
  • Patent number: 11763432
    Abstract: The present disclosure provides a multi-exposure image fusion (MEF) method based on a feature distribution weight of a multi-exposure image, including: performing color space transformation (CST) on an image, determining a luminance distribution weight of the image, determining an exposure distribution weight of the image, determining a local gradient weight of the image, determining a final weight, and determining a fused image. The present disclosure combines the luminance distribution weight of the image, the exposure distribution weight of the image and the local gradient weight of the image to obtain the final weight, and fuses the input image and the weight with the existing pyramid-based multi-resolution fusion method to obtain the fused image, thereby solving the technical problem that an existing MEF method does not consider the overall feature distribution of the multi-exposure image.
    Type: Grant
    Filed: February 24, 2022
    Date of Patent: September 19, 2023
    Assignee: XI'AN UNIVERSITY OF POSTS & TELECOMMUNICATIONS
    Inventors: Weihua Liu, Biyan Ma, Ying Liu, Yanchao Gong, Fuping Wang
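A rough sketch of per-pixel weighting for a multi-exposure stack follows: a well-exposedness term and a local-gradient term are multiplied and normalized across exposures. The specific luminance-distribution and exposure-distribution weights of the patent are not reproduced here, and a plain normalized weighted sum stands in for the pyramid-based multi-resolution fusion.

```python
import numpy as np

def fusion_weights(stack_gray, sigma=0.2):
    """Per-pixel weights for a multi-exposure stack (values in [0, 1]):
    well-exposedness (Gaussian around mid-gray) times local gradient magnitude."""
    weights = []
    for img in stack_gray:
        exposure_w = np.exp(-((img - 0.5) ** 2) / (2 * sigma ** 2))
        gy, gx = np.gradient(img)
        gradient_w = np.hypot(gx, gy) + 1e-6
        weights.append(exposure_w * gradient_w)
    weights = np.stack(weights)
    return weights / weights.sum(axis=0, keepdims=True)   # normalize across exposures

def fuse(stack_gray):
    w = fusion_weights(stack_gray)
    return (w * np.stack(stack_gray)).sum(axis=0)          # stand-in for pyramid fusion

# toy usage: three synthetic exposures of the same scene
base = np.clip(np.random.rand(120, 160), 0, 1)
stack = [np.clip(base * g, 0, 1) for g in (0.4, 1.0, 2.0)]  # under / normal / over
fused = fuse(stack)
print(fused.shape, float(fused.min()), float(fused.max()))
```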
  • Patent number: 11763563
    Abstract: A system for monitoring, recording, and processing an activity includes one or more cameras for automatically recording video of the activity. A processor and memory, associated and in communication with the camera, are disposed near the location of the activity. The system may include AI logic configured to identify a user recorded within a video frame captured by the camera. The system may also detect and identify a user when the user is located within a predetermined area. The system may include a video processing engine configured to process images within the video frame to identify the user and may modify and format the video upon identifying the user and the activity. The system may include a communication module to communicate formatted video to a remote video processing system, which may further process the video and enable access to a mobile app of the user.
    Type: Grant
    Filed: February 25, 2022
    Date of Patent: September 19, 2023
    Assignee: Hole-In-One Media, Inc.
    Inventor: Kevin R. Imes
  • Patent number: 11762957
    Abstract: An RGB-D fusion information-based obstacle target classification method includes: collecting an original image through a binocular camera within a target range, and acquiring a disparity map of the original image; collecting a color-calibrated RGB image through a reference camera of the binocular camera within the target range; acquiring an obstacle target through disparity clustering in accordance with the disparity map and the color-calibrated RGB image, and acquiring a target disparity map and a target RGB image of the obstacle target; calculating depth information about the obstacle target in accordance with the target disparity map; and acquiring a classification result of the obstacle target through RGB-D channel information fusion in accordance with the depth information and the target RGB image.
    Type: Grant
    Filed: August 12, 2021
    Date of Patent: September 19, 2023
    Assignee: Beijing Smarter Eye Technology Co. Ltd.
    Inventors: Chao Yang, An Jiang, Ran Meng, Hua Chai, Feng Cui
  • Patent number: 11763365
    Abstract: There is provided, in accordance with an embodiment of the present invention, an apparatus, system, and method to provide personalized online product fitting. One embodiment of the invention includes a method for personalized shopping that has the steps of an automated shopping assistant system accessing product data, a matchmaking system accessing user history data, the matchmaking system accessing user preference data, the matchmaking system accessing user anatomical data acquired from an automated shopping assistant apparatus, and the automated shopping assistant system matching the user history, preference, and anatomical data with data to generate a personalized matching system.
    Type: Grant
    Filed: June 27, 2018
    Date of Patent: September 19, 2023
    Assignee: NIKE, Inc.
    Inventors: David Bleicher, Tamir Lousky
  • Patent number: 11763478
    Abstract: Various implementations disclosed herein include devices, systems, and methods that generate floorplans and measurements using a three-dimensional (3D) representation of a physical environment generated based on sensor data.
    Type: Grant
    Filed: January 15, 2021
    Date of Patent: September 19, 2023
    Assignee: Apple Inc.
    Inventors: Yang Yang, Boyuan Sun, Afshin Dehghan, Feng Tang, Bin Liu, Fengfu Li
  • Patent number: 11763579
    Abstract: A method, computer system, and computer-readable medium for determining a surface pattern for a target object using an evolutionary algorithm such as a genetic algorithm, a parameterized texture-generating function, a 3D renderer for rendering images of a 3D model of the target object with a texture obtained from the parameterized texture-generating function, and an object recognition model to process the images and predict whether or not each image contains an object of the target object's type or category. Sets of parameters are generated using the evolutionary algorithm, and the accuracy of the object recognition model's predictions on the images, with the 3D model textured according to each set of parameters, is used to determine a fitness score by which sets of parameters are scored for the purpose of obtaining further generations of sets of parameters, for example through genetic-algorithm operations such as mutation and crossover.
    Type: Grant
    Filed: May 7, 2020
    Date of Patent: September 19, 2023
    Assignee: The Secretary of State for Defence
    Inventor: Geraint Johnson
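The evolutionary loop described above can be sketched as a plain genetic algorithm: score each parameter set by how the object-recognition model responds to renders of the textured model, keep the best half, and produce children by crossover and mutation. The renderer and recognizer below are placeholder stubs, and minimizing detector confidence is only one possible fitness direction; the patent's fitness function may reward either outcome.

```python
import random

def render_with_texture(params):
    """Placeholder for the 3-D renderer: pretend the parameter list is the image set."""
    return [params]

def recognition_confidence(images):
    """Placeholder for the object-recognition model: mean 'confidence' score."""
    return sum(sum(p) / len(p) for p in images) / len(images)

def evolve(n_params=16, pop_size=30, generations=40, mutation_rate=0.2):
    pop = [[random.random() for _ in range(n_params)] for _ in range(pop_size)]
    for _ in range(generations):
        # fitness: here, low detector confidence on the rendered images scores high
        scored = sorted(pop, key=lambda p: recognition_confidence(render_with_texture(p)))
        parents = scored[: pop_size // 2]
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, n_params)       # single-point crossover
            child = a[:cut] + b[cut:]
            child = [g + random.gauss(0, 0.1) if random.random() < mutation_rate else g
                     for g in child]                  # mutation
            children.append(child)
        pop = parents + children
    return min(pop, key=lambda p: recognition_confidence(render_with_texture(p)))

best = evolve()
print("best parameter set confidence:", recognition_confidence(render_with_texture(best)))
```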
  • Patent number: 11763492
    Abstract: In some embodiments, a method includes receiving a first image and a second image from a stereo camera pair. The method includes selecting a first row of pixels from the rectified image and a set of rows of pixels from the second image and comparing the first row of pixels with each row of pixels from the set of rows of pixels to determine disparity values. The method includes determining a pair of rows of pixels having the first row of pixels and a second row of pixels from the set of rows of pixels. The pair of rows of pixels has an offset no greater than an offset between the first row of pixels and each row of pixels from remaining rows of pixels. The method includes adjusting, based on the offset, the relative rotational position between the first stereo camera and the second stereo camera.
    Type: Grant
    Filed: June 10, 2022
    Date of Patent: September 19, 2023
    Assignee: PlusAI, Inc.
    Inventors: Anurag Ganguli, Timothy P. Daly, Jr., Mayank Gupta, Wenbin Wang, Huan Yang Chang
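The row-comparison step in the entry above can be sketched as a sum-of-absolute-differences search: compare one row of the first image against a small window of rows in the second and keep the offset with the lowest cost. Aggregating such offsets over many rows could then drive the rotational adjustment; the SAD metric and the search window are illustrative assumptions.

```python
import numpy as np

def best_row_offset(img1, img2, row, search=5):
    """Compare one row of img1 against nearby rows of img2 and return the
    vertical offset (in rows) with the smallest sum of absolute differences."""
    h = img2.shape[0]
    ref = img1[row].astype(np.float32)
    best, best_cost = 0, np.inf
    for off in range(-search, search + 1):
        r = row + off
        if 0 <= r < h:
            cost = np.abs(ref - img2[r].astype(np.float32)).sum()
            if cost < best_cost:
                best, best_cost = off, cost
    return best

# toy usage: the second image is the first shifted down by 2 rows,
# so the matching row sits 2 rows lower and the estimated offset is +2
img1 = (np.random.rand(100, 120) * 255).astype(np.uint8)
img2 = np.roll(img1, 2, axis=0)
print(best_row_offset(img1, img2, row=50))   # expected: 2
```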
  • Patent number: 11763570
    Abstract: An obstacle detection device includes a stereo camera that captures IR images on the right side and left side; a point cloud data creation unit that creates point cloud data having three-dimensional position information from the IR images captured by the stereo camera; an obstacle detection unit that detects an obstacle according to point cloud data; an invalid area identification unit that identifies whether an invalid area with no point cloud is present in point cloud data; and a target recognition unit that recognizes a particular target in the invalid area with no point cloud according to an IR image. When a particular target is recognized by the target recognition unit, the obstacle detection unit does not determine the invalid area with no point cloud to be an obstacle.
    Type: Grant
    Filed: June 23, 2021
    Date of Patent: September 19, 2023
    Assignee: Alps Alpine Co., LTD.
    Inventors: Kaoru Mashiko, Tomoki Nakagawa, Tatsuya Matsui
  • Patent number: 11763469
    Abstract: A registration technique is provided that can combine one or more related registrations to enhance accuracy of a registration of image volumes. A registration relationship between a first source volume and a target volume and a registration relationship between the first source volume and a second source volume are concatenated to provide an estimate of a registration relationship between the second source volume and the target volume. The estimate is utilized to inform the direct registration of the second source volume to the target volume or utilized in place of the direct registration.
    Type: Grant
    Filed: June 4, 2019
    Date of Patent: September 19, 2023
    Assignee: MIM Software Inc.
    Inventor: Jonathan William Piper
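The concatenation described above is a composition of rigid transforms: if one registration maps source 1 to the target and another maps source 2 to source 1, their product estimates the source-2-to-target registration. A minimal sketch with 4x4 homogeneous matrices (the matrix convention is an assumption):

```python
import numpy as np

def concat_registration(T_src1_to_target, T_src2_to_src1):
    """Estimate the second-source-to-target registration by concatenating two
    known rigid registrations (4x4 homogeneous transforms)."""
    return T_src1_to_target @ T_src2_to_src1

def make_T(R, t):
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, t
    return T

# toy usage: known src1->target and src2->src1 registrations
Rz = lambda a: np.array([[np.cos(a), -np.sin(a), 0],
                         [np.sin(a),  np.cos(a), 0],
                         [0, 0, 1]])
T_1_to_tgt = make_T(Rz(np.deg2rad(15)), [10.0, 0.0, 2.0])
T_2_to_1   = make_T(Rz(np.deg2rad(-5)), [0.0, 3.0, 0.0])
T_2_to_tgt_estimate = concat_registration(T_1_to_tgt, T_2_to_1)
print(np.round(T_2_to_tgt_estimate, 3))
# The estimate can seed or sanity-check the direct src2 -> target registration.
```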
  • Patent number: 11750943
    Abstract: Provided are a method and a device for correcting lateral chromatic aberration (LCA), a storage medium, and computer equipment. In the method, a relationship model between lens position and magnitude of LCA is constructed based on preset parameters of lens positions, and the relationship model is stored as calibration data; system parameters of a camera to be corrected and the pre-stored calibration data are obtained; the LCA of the camera to be corrected is obtained by calculation from the system parameters; and the LCA is corrected using the calibration data. With the method, the LCA of the lens can be removed when the focus distance is changed, and the method is suitable for mass production.
    Type: Grant
    Filed: October 20, 2021
    Date of Patent: September 5, 2023
    Assignee: AAC Optics Solutions Pte. Ltd.
    Inventors: Vida Fakour Sevom, Dmytro Paliy, Juuso Gren
  • Patent number: 11748947
    Abstract: A display method is a display method performed by a display device that operates in conjunction with a mobile object, and includes: determining which one of first surrounding information, which is video showing a surrounding condition of the mobile object and is generated using two-dimensional information, and second surrounding information, which is video showing the surrounding condition of the mobile object and is generated using three-dimensional data, is to be displayed, based on a driving condition of the mobile object; and displaying the one of the first surrounding information and the second surrounding information that is determined to be displayed.
    Type: Grant
    Filed: December 17, 2021
    Date of Patent: September 5, 2023
    Assignee: PANASONIC INTELLECTUAL PROPERTY CORPORATION OF AMERICA
    Inventors: Tatsuya Koyama, Takahiro Nishi, Toshiyasu Sugio, Tadamasa Toma, Satoshi Yoshikawa, Toru Matsunobu
  • Patent number: 11743585
    Abstract: An electronic apparatus is provided. The electronic apparatus includes at least one camera module including at least one lens, a first sensor configured to detect a motion of the electronic apparatus, and at least one processor configured to perform a first focusing operation of determining a target position of the at least one lens by focusing processing on a subject and moving the at least one lens to the target position, and perform a second focusing operation of, according to a determination that a first condition that a photographing distance, which is a distance to the subject, is less than a distance reference value and a depth-of-field value is less than or equal to a depth reference value is satisfied, additionally driving the at least one lens based on a calculated focusing correction value for compensating a motion.
    Type: Grant
    Filed: March 31, 2022
    Date of Patent: August 29, 2023
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Kioh Jung, Dongyoul Park, Soonkyoung Choi
  • Patent number: 11742962
    Abstract: A method of monitoring an antenna array comprises generating first image data at a first time. The first image data is reproducible as a first image of an antenna array unit. The method further comprises generating second image data at a second time. The second image data is reproducible as a second image of the antenna array unit. The method further comprises comparing the first image data and the second image data. The method further comprises transmitting an alert that is indicative of the presence of at least one fault with the antenna array unit at the second time. The alert is transmitted in response to the comparison between the first image data and the second image data indicating the presence of the at least one fault with the antenna array unit at the second time.
    Type: Grant
    Filed: September 13, 2021
    Date of Patent: August 29, 2023
    Assignee: Quanta Computer Inc.
    Inventor: Chi-Sen Yen
  • Patent number: 11741714
    Abstract: A system for monitoring, recording, and processing an activity includes one or more cameras for automatically recording video of the activity. A processor and memory, associated and in communication with the camera, are disposed near the location of the activity. The system may include AI logic configured to identify a user recorded within a video frame captured by the camera. The system may also detect and identify a user when the user is located within a predetermined area. The system may include a video processing engine configured to process images within the video frame to identify the user and may modify and format the video upon identifying the user and the activity. The system may include a communication module to communicate formatted video to a remote video processing system, which may further process the video and enable access to a mobile app of the user.
    Type: Grant
    Filed: August 12, 2022
    Date of Patent: August 29, 2023
    Assignee: HOLE-IN-ONE MEDIA, INC.
    Inventor: Kevin R. Imes
  • Patent number: 11740358
    Abstract: A method of and system for processing Light Detection and Ranging (LIDAR) point cloud data. The method is executable by an electronic device, communicatively coupled to a LIDAR installed on a vehicle, the LIDAR having a plurality of lasers for capturing LIDAR point cloud data. The method includes receiving a first LIDAR point cloud data captured by the LIDAR; executing a Machine Learning Algorithm (MLA) for: analyzing a first plurality of LIDAR points of the first point cloud data in relation to a response pattern of the plurality of lasers; retrieving a grid representation data of a surrounding area of the vehicle; and determining whether the first plurality of LIDAR points is associated with a blind spot, the blind spot preventing a detection algorithm of the electronic device from detecting the presence of at least one object surrounding the vehicle, conditioned on the at least one object being present.
    Type: Grant
    Filed: July 15, 2020
    Date of Patent: August 29, 2023
    Assignee: YANDEX SELF DRIVING GROUP LLC
    Inventors: Boris Konstantinovich Yangel, Maksim Ilich Stebelev
  • Patent number: 11743994
    Abstract: A load control system may be configured using a graphical user interface software. The graphical user interface software may display a first icon and a second icon. The first icon may represent a first electrical device and the second icon may represent a second electrical device. The first icon and the second icon may represent the relative location of the first electrical device and the second electrical device within a load control environment. The graphical user interface software may display a line from a selected icon (e.g., first icon) to a cursor. The graphical user interface software may adjust the line from the selected icon, for example, as the cursor moves. The graphical user interface software may define and store an association between the first electrical device and a second electrical device, for example, in response to the user selecting the first icon and the second icon.
    Type: Grant
    Filed: November 30, 2020
    Date of Patent: August 29, 2023
    Assignee: Lutron Technology Company LLC
    Inventors: Ritika Arora, Manisha Dahiya Baluja, John N. Callen, Erica L. Clymer, Sanjeev Kumar, Mark Law, Sandeep Mudabail Raghuram, Anurag Singh, Christopher Spencer
  • Patent number: 11740689
    Abstract: An electronic device may include an ambient light sensor that measures ambient light color, a projector that projects ambient-light-matching illumination onto a surface, a user input device such as a microphone that gathers user input, and a position sensor that measures a position of the surface, a user, and/or a real-world object relative to the device. The ambient-light-matching illumination may create illuminated regions on the surface that blend in with the surrounding ambient light. Certain pixels in the projector may be turned off to create one or more unilluminated regions within the illuminated regions. The unilluminated regions may form apparent shadows. Control circuitry in the electronic device may adjust characteristics of the unilluminated regions by dynamically adjusting which pixels are turned off based on voice input, gesture input, and/or other sensor data.
    Type: Grant
    Filed: June 16, 2022
    Date of Patent: August 29, 2023
    Assignee: Apple Inc.
    Inventors: Christopher J Verplaetse, Clark D Della Silva
  • Patent number: 11736676
    Abstract: An imaging apparatus including an imaging lens, and an image sensor array of first and second image sensor units, wherein a single first image sensor unit includes a single first microlens and a plurality of image sensors, a single second image sensor unit includes a single second microlens and a single image sensor, light passing through the imaging lens and reaching each first image sensor unit passes through the first microlens and forms an image on the image sensors constituting the first image sensor unit, light passing through the imaging lens and reaching each second image sensor unit passes through the second microlens and forms an image on the image sensor constituting the second image sensor unit, an inter-unit light shielding layer is formed between the image sensor units, and a light shielding layer is not formed between the image sensor units constituting the first image sensor unit.
    Type: Grant
    Filed: July 21, 2021
    Date of Patent: August 22, 2023
    Assignee: SONY SEMICONDUCTOR SOLUTIONS CORPORATION
    Inventor: Tomohiro Yamazaki
  • Patent number: 11731692
    Abstract: A driving support device performs steering control and deceleration control for avoiding an object which is detected in front of a host vehicle. The driving support device performs: calculating a target lateral distance, which is a target of the steering control and is a lateral distance between the host vehicle and the object when the host vehicle passes by the object; and, when the target lateral distance is less than a threshold value, increasing a target deceleration, which is a target of the deceleration control and is the deceleration of the host vehicle when the host vehicle passes by the object, as a lateral distance restraint value, obtained by subtracting the target lateral distance from the threshold value, increases.
    Type: Grant
    Filed: May 27, 2020
    Date of Patent: August 22, 2023
    Assignee: TOYOTA JIDOSHA KABUSHIKI KAISHA
    Inventors: Toshiki Kinoshita, Shintaro Inoue
  • Patent number: 11734850
    Abstract: On-floor obstacle detection using an RGB-D camera is disclosed. An obstacle on a floor is detected by receiving an image including depth channel data and RGB channel data through the RGB-D camera, estimating a ground plane corresponding to the floor based on the depth channel data, obtaining a foreground of the image corresponding to the ground plane based on the depth channel data, performing a distribution modeling on the foreground of the image based on the RGB channel data to obtain a 2D location of the obstacle, and transforming the 2D location of the obstacle into a 3D location of the obstacle based on the depth channel data.
    Type: Grant
    Filed: April 26, 2021
    Date of Patent: August 22, 2023
    Assignees: UBTECH NORTH AMERICA RESEARCH AND DEVELOPMENT CENTER CORP, UBTECH ROBOTICS CORP LTD
    Inventors: Dan Shao, Dejun Guo, Zhen Xiu, Chuqiao Dong, Huan Tan
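A compact sketch of the pipeline in the entry above: back-project the depth channel to 3-D points, estimate the ground plane, take as foreground the pixels whose points sit clearly off that plane, and convert the resulting 2-D region back to a 3-D location with the same depth data. The plane fit from the bottom image rows and the thresholds are illustrative assumptions, and the RGB distribution-modeling step of the patent is not reproduced here.

```python
import numpy as np

def depth_to_points(depth, K):
    """Back-project a depth image (meters, 0 = invalid) into camera-frame 3-D points."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    x = (u - K[0, 2]) * depth / K[0, 0]
    y = (v - K[1, 2]) * depth / K[1, 1]
    return np.stack([x, y, depth], axis=-1)

def fit_plane(points):
    """Least-squares plane n.x = d through an (N, 3) array of points."""
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid)
    n = vt[-1] / np.linalg.norm(vt[-1])
    return n, float(n @ centroid)

# toy scene: camera 1 m above a flat floor, looking forward
K = np.array([[570.0, 0, 160], [0, 570.0, 120], [0, 0, 1]])
vv = np.meshgrid(np.arange(320), np.arange(240))[1]
depth = np.where(vv > 125, 570.0 / np.maximum(vv - 120, 1), 0.0)  # floor depth
depth[150:180, 150:170] = 3.0                                     # a box on the floor
pts = depth_to_points(depth, K)

# ground plane estimated from the bottom image rows (assumed to be floor)
n, d = fit_plane(pts[200:].reshape(-1, 3))

# foreground = valid pixels whose 3-D point is clearly off the ground plane
mask = (np.abs(pts @ n - d) > 0.1) & (depth > 0)

# 2-D location (pixel bounding box) and its 3-D counterpart for the obstacle
ys, xs = np.nonzero(mask)
print("2-D box:", xs.min(), ys.min(), xs.max(), ys.max())
print("3-D location:", np.round(pts[mask].mean(axis=0), 2))
```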
  • Patent number: 11734883
    Abstract: This specification describes systems and methods for generating a mapping of a physical space from point cloud data for the physical space. The methods can include receiving the point cloud data for the physical space, filtering the point cloud data to, at least, remove sparse points from the point cloud data, aligning the point cloud data along x, y, and z dimensions that correspond to an orientation of the physical space, and classifying the points in the point cloud data as corresponding to one or more types of physical surfaces. The methods can also include identifying specific physical structures in the physical space based, at least in part, on classifications for the points in the point cloud data, and generating the mapping of the physical space to identify the specific physical structures and corresponding contours for the specific physical structures within the orientation of the physical space.
    Type: Grant
    Filed: April 14, 2021
    Date of Patent: August 22, 2023
    Assignee: Lineage Logistics, LLC
    Inventors: Christopher Frank Eckman, Brady Michael Lowe
  • Patent number: 11734801
    Abstract: Examples are provided that relate to processing depth camera data over a distributed computing system, where phase unwrapping is performed prior to denoising. One example provides a time-of-flight camera comprising a time-of-flight depth image sensor, a logic machine, a communication subsystem, and a storage machine holding instructions executable by the logic machine to process time-of-flight image data acquired by the time-of-flight depth image sensor by, prior to denoising, performing phase unwrapping pixel-wise on the time-of-flight image data to obtain coarse depth image data comprising depth values; and send the coarse depth image data and active brightness image data to a remote computing system via the communication subsystem for denoising.
    Type: Grant
    Filed: May 28, 2021
    Date of Patent: August 22, 2023
    Assignee: Microsoft Technology Licensing, LLC
    Inventor: Sergio Ortiz Egea
  • Patent number: 11736668
    Abstract: A photo filter (e.g., artistic) light field effect system comprises an eyewear device that includes a frame, a temple connected to a lateral side of the frame, and a depth-capturing camera. Execution of programming by a processor configures the photo filter light field effect system to apply a photo filter selection to: (i) a left raw image or a left processed image to create a left photo filter image, and (ii) a right raw image or a right processed image to create a right photo filter image. The photo filter light field effect system generates, a photo filter light field effect image with an appearance of a spatial rotation or movement, by blending together the left photo filter image and the right photo filter image based on a left image disparity map and a right image disparity map.
    Type: Grant
    Filed: October 11, 2021
    Date of Patent: August 22, 2023
    Assignee: Snap Inc.
    Inventor: Sagi Katz
  • Patent number: 11727642
    Abstract: A reduction of the work burden relating to generation of a virtual viewpoint image is implemented. An image processing apparatus includes a virtual viewpoint image generation section that generates, on the basis of three-dimensional information that represents an imaged imaging object in a three-dimensional space, an observation image from a viewpoint in the three-dimensional space as a virtual viewpoint image, and the virtual viewpoint image generation section sets the viewpoint so that it follows movement of the imaging object. This makes it possible to reduce the operation burden relating to setting of a viewpoint.
    Type: Grant
    Filed: May 25, 2018
    Date of Patent: August 15, 2023
    Assignee: SONY CORPORATION
    Inventors: Yuta Nakao, Nobuho Ikeda, Hiroshi Ikeda
  • Patent number: 11726209
    Abstract: A system and a method for removing artifacts from 3D coordinate data are provided. The system includes one or more processors and a 3D measuring device. The one or more processors are operable to receive training data and train the 3D measuring device to identify artifacts by analyzing the training data. The one or more processors are further operable to identify artifacts in live data based on the training of the processor system. The one or more processors are further operable to generate clear scan data by filtering the artifacts from the live data and output the clear scan data.
    Type: Grant
    Filed: June 5, 2020
    Date of Patent: August 15, 2023
    Assignee: FARO TECHNOLOGIES, INC.
    Inventors: Louis Bergmann, Vadim Demkiv, Daniel Flohr
  • Patent number: 11727591
    Abstract: A method and apparatus with image depth estimation are provided. The method includes obtaining a first statistical value associated with a depth for each of plural pixels included in an input image based on a first channel of output data obtained by applying the input image to a neural network, obtaining a second statistical value associated with a depth for each of the plural pixels in the input image based on a second channel of the output data, and estimating depth information of each of the plural pixels in the input image based on the first statistical value and the second statistical value. The neural network may be trained based on a probability distribution for the depth of each pixel in an image, the distribution being based on a first statistical value and a second statistical value obtained for an image with predetermined depth information in the training data.
    Type: Grant
    Filed: December 9, 2020
    Date of Patent: August 15, 2023
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Seungeon Kim, Hyunsung Chang
  • Patent number: 11727577
    Abstract: Implementations described herein relate to methods, systems, and computer-readable media to render a foreground video. In some implementations, a method includes receiving a plurality of video frames that include depth data and color data. The method further includes downsampling the frames of the video. The method further includes, for each frame, generating an initial segmentation mask that categorizes each pixel of the frame as foreground pixel or background pixel. The method further includes determining a trimap that classifies each pixel of the frame as known background, known foreground, or unknown. The method further includes, for each pixel that is classified as unknown, calculating and storing a weight in a weight map. The method further includes performing fine segmentation to obtain a binary mask for each frame. The method further includes upsampling the plurality of frames based on the binary mask for each frame to obtain a foreground video.
    Type: Grant
    Filed: November 9, 2021
    Date of Patent: August 15, 2023
    Assignee: Google LLC
    Inventors: Guangyu Zhou, Qiang Chen, Niklas Enbom
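The trimap and weight-map steps above can be sketched by eroding the initial segmentation mask to get known foreground, eroding its complement to get known background, and leaving a band of unknown pixels in between. The abstract does not say how the per-pixel weight is computed, so the distance-based weight below is only a placeholder assumption.

```python
import numpy as np
from scipy import ndimage

def make_trimap(initial_mask, band=5):
    """Classify pixels as known foreground (1.0), known background (0.0), or
    unknown (0.5) by eroding/dilating an initial binary segmentation mask;
    the unknown band is where the later fine segmentation does its work."""
    fg = ndimage.binary_erosion(initial_mask, iterations=band)    # surely foreground
    bg = ndimage.binary_erosion(~initial_mask, iterations=band)   # surely background
    trimap = np.full(initial_mask.shape, 0.5)                     # unknown
    trimap[fg] = 1.0
    trimap[bg] = 0.0
    return trimap

def unknown_weights(trimap):
    """Toy weight map for unknown pixels: inverse distance to known foreground;
    a placeholder for whatever weighting the patent actually stores."""
    dist = ndimage.distance_transform_edt(trimap != 1.0)
    return np.where(trimap == 0.5, 1.0 / (1.0 + dist), 0.0)

# toy usage: a circular "person" in the middle of the frame
yy, xx = np.mgrid[0:120, 0:160]
mask = (xx - 80) ** 2 + (yy - 60) ** 2 < 40 ** 2
trimap = make_trimap(mask)
weights = unknown_weights(trimap)
print((trimap == 0.5).sum(), float(weights.max()))
```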
  • Patent number: 11721076
    Abstract: The system generates real-time augmented reality video for TV broadcast, cinema or video games. The system includes a monoscopic video camera including a body, a stereoscopic video camera, and a processor. The system includes sensors, including multiple non-optical sensors, which provide real-time positioning data defining the 3D position and 3D orientation of the monoscopic video camera, or enable the 3D position and 3D orientation of the monoscopic video camera to be calculated. The processor is configured to use the real-time positioning data automatically to create, recall, render or modify computer generated 3D objects. The processor is configured to determine the 3D position and orientation of the monoscopic video camera with reference to a 3D map of the real-world generated whilst the camera is being used to capture video. The processor is configured to track the scene without a requirement for an initial or prior survey of the scene.
    Type: Grant
    Filed: November 22, 2021
    Date of Patent: August 8, 2023
    Assignee: NCAM TECHNOLOGIES LIMITED
    Inventors: Samuel Boivin, Brice Michoud
  • Patent number: 11721037
    Abstract: The present application relates to a method for indoor positioning, and an apparatus, an electronic device and a storage medium, which relates to the fields of positioning technologies and deep learning technologies. A specific implementation solution is: extracting descriptors of structure lines in a specified image collected by a terminal indoors; based on the descriptors of the structure lines in the specified image and descriptors of structure lines of images included in a pre-established indoor image database, acquiring, from the indoor image database, target structure line descriptors closest to the descriptors of the structure lines in the specified image; and acquiring pose information of the terminal based on a pre-established indoor 3D structure line map and the target structure line descriptors corresponding to the descriptors of the structure lines in the specified image.
    Type: Grant
    Filed: March 15, 2021
    Date of Patent: August 8, 2023
    Assignee: BEIJING BAIDU NETCOM SCIENCE AND TECHNOLOGY CO., LTD.
    Inventor: Zhaohu Li
  • Patent number: 11714921
    Abstract: Provided are an image processing method, an image matching method, a device, and a storage medium. The image processing method includes: obtaining an image feature of an input image; determining a plurality of local image features of the image feature; determining a plurality of local feature vectors corresponding to the plurality of local image features, respectively; and determining a hash code of the input image based on the plurality of local feature vectors.
    Type: Grant
    Filed: June 30, 2020
    Date of Patent: August 1, 2023
    Assignees: BOE TECHNOLOGY GROUP CO., LTD., PEKING UNIVERSITY
    Inventors: Yigeng Fang, Xiaojun Tang, Yadong Mu
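One generic way to turn many local feature vectors into a single hash code, in the spirit of the entry above, is to pool the local vectors and binarize random projections of the pooled descriptor. The pooling and sign-of-projection scheme is a stand-in assumption, not the patented construction.

```python
import numpy as np

def hash_code(local_features, n_bits=64, seed=0):
    """Aggregate a set of local feature vectors into one binary hash code:
    pool the local vectors, project with a fixed random matrix, take signs."""
    rng = np.random.default_rng(seed)
    pooled = np.asarray(local_features).mean(axis=0)              # aggregate local vectors
    proj = rng.standard_normal((n_bits, pooled.shape[0]))         # fixed random hyperplanes
    return (proj @ pooled > 0).astype(np.uint8)

def hamming(a, b):
    return int(np.count_nonzero(a != b))

# toy usage: two similar images (perturbed local features) vs. a different one
feats_a = np.random.rand(32, 128)
feats_b = feats_a + 0.01 * np.random.randn(32, 128)
feats_c = np.random.rand(32, 128)
ha, hb, hc = hash_code(feats_a), hash_code(feats_b), hash_code(feats_c)
print("similar pair distance:", hamming(ha, hb))
print("different pair distance:", hamming(ha, hc))
```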
  • Patent number: 11714618
    Abstract: A method for operating on a target function to provide computer code instructions configured to implement automatic adjoint differentiation of the target function. The method comprises: determining, based on the target function, a linearized computational map (100), LCM, of the target function, wherein each node of the LCM (100) comprises an elementary operation; and, for each node of the LCM (100), forming computer code instructions configured to: (i) compute intermediate data associated with a forward function of an automatic adjoint differentiation algorithm; and (ii) increment, according to the automatic adjoint differentiation algorithm, adjoint variables of the preceding connected nodes of each node in dependence on the intermediate data; wherein forming computer code instructions for both step (i) and step (ii) for each node is performed prior to performing said steps for a subsequent node of the LCM (100).
    Type: Grant
    Filed: July 8, 2020
    Date of Patent: August 1, 2023
    Inventor: Dmitri Goloubentsev
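The forward/adjoint split described above can be illustrated with a toy linearized computational map: the forward sweep evaluates each elementary node and records its local partial derivatives (the "intermediate data"), and the reverse sweep increments the adjoints of each node's predecessors. The graph encoding and node set below are assumed for illustration and do not show the patent's code-generation step, which emits the instructions for both sweeps node by node.

```python
import math

# target function f(x0, x1) = sin(x0 * x1) + x0, as elementary nodes:
# each node is (operation, list of predecessor node indices)
GRAPH = [
    ("input", []),            # 0: x0
    ("input", []),            # 1: x1
    ("mul",   [0, 1]),        # 2: x0 * x1
    ("sin",   [2]),           # 3: sin(x0 * x1)
    ("add",   [3, 0]),        # 4: sin(x0 * x1) + x0
]

def forward(inputs):
    """Evaluate every node and store the local partials d(node)/d(predecessor)."""
    values, partials = [], []
    for op, preds in GRAPH:
        if op == "input":
            values.append(inputs[len(values)])
            partials.append([])
        elif op == "mul":
            a, b = (values[p] for p in preds)
            values.append(a * b)
            partials.append([b, a])
        elif op == "sin":
            (a,) = (values[p] for p in preds)
            values.append(math.sin(a))
            partials.append([math.cos(a)])
        elif op == "add":
            values.append(sum(values[p] for p in preds))
            partials.append([1.0] * len(preds))
    return values, partials

def adjoints(partials):
    """Reverse sweep: seed the output adjoint with 1 and push it back."""
    adj = [0.0] * len(GRAPH)
    adj[-1] = 1.0
    for i in range(len(GRAPH) - 1, -1, -1):
        for pred, local in zip(GRAPH[i][1], partials[i]):
            adj[pred] += adj[i] * local          # increment predecessor adjoints
    return adj

values, partials = forward({0: 0.7, 1: 1.3})
adj = adjoints(partials)
print("f =", values[-1])
print("df/dx0 =", adj[0], " df/dx1 =", adj[1])   # 1.3*cos(0.91)+1 and 0.7*cos(0.91)
```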