3-D or Stereo Imaging Analysis Patents (Class 382/154)
-
Patent number: 11830211
Abstract: Embodiments of the disclosure provide a disparity map acquisition method and apparatus, a device, a control system and a storage medium. The method includes: respectively performing feature extraction on left-view images and right-view images of a captured object layer by layer through M cascaded feature extraction layers, to obtain a left-view feature map set and a right-view feature map set of each layer, M being a positive integer greater than or equal to 2; constructing an initial disparity map based on the left-view feature map set and the right-view feature map set extracted by an Mth feature extraction layer; and iteratively refining, starting from an (M−1)th layer, the disparity map through the left-view feature map set and the right-view feature map set extracted by each feature extraction layer in sequence until a final disparity map is obtained based on an iteratively refined disparity map of a first layer.
Type: Grant
Filed: April 22, 2021
Date of Patent: November 28, 2023
Assignee: TENCENT TECHNOLOGY (SHENZHEN) COMPANY LIMITED
Inventor: Ze Qun Jie
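The coarse-to-fine refinement described in this abstract — build a disparity map at the coarsest feature level, then upsample and correct it level by level — can be illustrated with a minimal sketch. This is not the patented implementation; the per-level residual (which a real network would predict from the left/right feature maps) is a placeholder of zeros, and all names are assumptions.

```python
import numpy as np

def refine_coarse_to_fine(init_disp, num_levels):
    """Iteratively upsample and refine a disparity map from the coarsest
    pyramid level (level M) back to the finest (level 1). The residual
    here is a zero placeholder; a real system would predict a per-level
    correction from the left-view and right-view feature maps."""
    disp = init_disp
    for level in range(num_levels - 1, 0, -1):  # levels M-1 .. 1
        # Upsample to the next finer resolution; disparity values double
        # because pixel offsets scale with image width.
        disp = np.kron(disp, np.ones((2, 2))) * 2.0
        residual = np.zeros_like(disp)  # placeholder per-level correction
        disp = disp + residual
    return disp

coarse = np.full((4, 4), 1.5)  # initial disparity at the coarsest level M
final = refine_coarse_to_fine(coarse, num_levels=3)
print(final.shape)  # (16, 16)
```

With 3 levels the map is upsampled twice, so a 4x4 coarse estimate becomes a 16x16 map whose values have been scaled by 4.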
-
Patent number: 11830151
Abstract: Disclosed is an approach for managing and displaying virtual content in a mixed reality environment on a one-on-one basis independently by each application; each virtual content item is rendered by its respective application into a bounded volume referred to herein as a "Prism." Each Prism may have characteristics and properties that allow a universe application to manage and display the Prism in the mixed reality environment, such that the universe application may manage the placement and display of the virtual content in the mixed reality environment by managing the Prism itself.
Type: Grant
Filed: April 27, 2021
Date of Patent: November 28, 2023
Assignee: Magic Leap, Inc.
Inventors: June Tate-Gans, Eric Norman Yiskis, Mark Ashley Rushton, David William Hover, Praveen Babu J D
-
Patent number: 11826111
Abstract: A method of tracking motion of a body part, the method comprising: (a) gathering motion data from a body part repositioned within a range of motion, the body part having mounted thereto a motion sensor; (b) gathering a plurality of radiographic images taken of the body part while the body part is in different positions within the range of motion, the plurality of radiographic images having the body part and the motion sensor within a field of view; and, (c) constructing a virtual three-dimensional model of the body part from the plurality of radiographic images using a structure of the motion sensor identifiable within at least two of the plurality of radiographic images to calibrate the radiographic images.
Type: Grant
Filed: April 12, 2022
Date of Patent: November 28, 2023
Assignee: TECHMAH MEDICAL LLC
Inventor: Mohamed R. Mahfouz
-
Patent number: 11830141
Abstract: In an embodiment, a 3D facial modeling system includes a plurality of cameras configured to capture images from different viewpoints, a processor, and a memory containing a 3D facial modeling application and parameters defining a face detector, wherein the 3D facial modeling application directs the processor to obtain a plurality of images of a face captured from different viewpoints using the plurality of cameras, locate a face within each of the plurality of images using the face detector, wherein the face detector labels key feature points on the located face within each of the plurality of images, determine disparity between corresponding key feature points of located faces within the plurality of images, and generate a 3D model of the face using the depth of the key feature points.
Type: Grant
Filed: February 22, 2022
Date of Patent: November 28, 2023
Assignee: Adela Imaging LLC
Inventor: Kartik Venkataraman
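The step of turning disparity between matched key feature points into depth rests on classic stereo triangulation, depth = f·B/d. A hedged sketch, with illustrative values (the patent's multi-camera rig would use calibrated pairs rather than a single fixed baseline):

```python
import numpy as np

def depth_from_disparity(disparity_px, focal_length_px, baseline_m):
    """Classic two-view triangulation: depth Z = f * B / d, where f is
    the focal length in pixels, B the camera baseline in meters, and d
    the pixel disparity of a matched feature point."""
    disparity_px = np.asarray(disparity_px, dtype=float)
    return focal_length_px * baseline_m / disparity_px

# disparities (in pixels) of key feature points matched between two views
d = [32.0, 16.0, 8.0]
z = depth_from_disparity(d, focal_length_px=800.0, baseline_m=0.1)
# larger disparity -> closer point: 2.5 m, 5 m, 10 m
```

The 3D model is then built by back-projecting each key point to its recovered depth.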
-
Patent number: 11823402
Abstract: A method and apparatus for correcting an error in depth information estimated from a two-dimensional (2D) image are disclosed. The method includes diagnosing an error in depth information by inputting a color image and depth information estimated using the color image to a depth error detection network, and determining enhanced depth information by maintaining or correcting the depth information based on the diagnosed error.
Type: Grant
Filed: May 3, 2021
Date of Patent: November 21, 2023
Assignees: Electronics and Telecommunications Research Institute, The Trustees of Indiana University
Inventors: Soon Heung Jung, Jeongil Seo, Jagpreet Singh Chawla, Nikhil Thakurdesai, David Crandall, Md Reza, Anuj Godase
-
Patent number: 11823415
Abstract: An autoencoder may be trained to predict 3D pose labels using simulation data extracted from a simulated environment, which may be configured to represent an environment in which the 3D pose estimator is to be deployed. Assets may be used to mimic the deployment environment such as 3D models or textures and parameters used to define deployment scenarios and/or conditions that the 3D pose estimator will operate under in the environment. The autoencoder may be trained to predict a segmentation image from an input image that is invariant to occlusions. Further, the autoencoder may be trained to exclude areas of the input image from the object that correspond to one or more appendages of the object. The 3D pose may be adapted to unlabeled real-world data using a GAN, which predicts whether output of the 3D pose estimator was generated from real-world data or simulated data.
Type: Grant
Filed: March 3, 2021
Date of Patent: November 21, 2023
Assignee: NVIDIA Corporation
Inventors: Sravya Nimmagadda, David Weikersdorfer
-
Patent number: 11823365
Abstract: The present invention provides a computer-based method for automatically evaluating validity and extent of at least one damaged object from image data, comprising the steps of: (a) receive image data comprising one or more images of at least one damaged object; (b) inspect any one of said one or more images for existing image alteration utilising an image alteration detection algorithm, and remove any image comprising image alterations from said one or more images; (c) identify and classify said at least one damaged object in any one of said one or more images, utilising at least one first machine learning algorithm; (d) detect at least one damaged area of said classified damaged object, utilising at least one second machine learning algorithm; (e) classify, quantitatively and/or qualitatively, an extent of damage of said at least one damaged area, utilising at least one third machine learning algorithm, and characteristic information of said damaged object and/or an undamaged object that is at least equivalent …
Type: Grant
Filed: September 15, 2017
Date of Patent: November 21, 2023
Assignee: Emergent Network Intelligence Ltd.
Inventors: Christopher Campbell, Karl Hewitson, Karl Brown, Jon Wilson, Sam Warren
-
Patent number: 11818303
Abstract: A computer-implemented method of detecting an object depicted in a digital image includes: detecting a plurality of identifying features of the object, wherein the plurality of identifying features are located internally with respect to the object; projecting a location of region(s) of interest of the object based on the plurality of identifying features, where each region of interest depicts content; building and/or selecting an extraction model configured to extract the content based at least in part on: the location of the region(s) of interest, the identifying feature(s), or both; and extracting some or all of the content from the digital image using the extraction model. Corresponding system and computer program product embodiments are disclosed. The inventive concepts enable reliable extraction of data from digital images where portions of an object are obscured/missing, and/or depicted on a complex background.
Type: Grant
Filed: August 27, 2020
Date of Patent: November 14, 2023
Assignee: KOFAX, INC.
Inventors: Jiyong Ma, Stephen M. Thompson, Jan W. Amtrup
-
Patent number: 11816795
Abstract: The photo-video based spatial-temporal volumetric capture system more efficiently produces high-frame-rate and high-resolution 4D dynamic human videos, without a need for 2 separate 3D and 4D scanner systems, by combining a set of high frame rate machine vision video cameras with a set of high resolution photography cameras. It reduces a need for manual CG works, by temporally up-sampling shape and texture resolution of 4D scanned video data from a temporally sparse set of higher resolution 3D scanned keyframes that are reconstructed both by using machine vision cameras and photography cameras. Unlike a typical performance capture system that uses a single static template model at initialization (e.g. A or T pose), the photo-video based spatial-temporal volumetric capture system stores multiple keyframes of high resolution 3D template models for robust and dynamic shape and texture refinement of the 4D scanned video sequence.
Type: Grant
Filed: December 20, 2019
Date of Patent: November 14, 2023
Assignee: SONY GROUP CORPORATION
Inventors: Kenji Tashiro, Chuen-Chien Lee, Qing Zhang
-
Patent number: 11816854
Abstract: A three-dimensional shape of a subject is analyzed by inputting captured images of a depth camera and a visible light camera. There is provided an image processing unit configured to input captured images of the depth camera and the visible light camera, to analyze a three-dimensional shape of the subject. The image processing unit generates a depth map based TSDF space (TSDF Volume) by using a depth map acquired from a captured image of the depth camera, and generates a visible light image based TSDF space by using a captured image of the visible light camera. Moreover, an integrated TSDF space is generated by integration processing on the depth map based TSDF space and the visible light image based TSDF space, and three-dimensional shape analysis processing on the subject is executed using the integrated TSDF space.
Type: Grant
Filed: March 4, 2020
Date of Patent: November 14, 2023
Assignee: SONY GROUP CORPORATION
Inventor: Hiroki Mizuno
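The integration step of this abstract — combining a depth-based TSDF volume with a visible-light-based one — is, in generic TSDF pipelines, a per-voxel weighted average. A minimal sketch of that idea, assuming each source carries an observation-weight volume (the abstract does not specify the exact integration rule):

```python
import numpy as np

def integrate_tsdf(tsdf_a, w_a, tsdf_b, w_b):
    """Fuse two truncated signed distance volumes by per-voxel weighted
    averaging; voxels observed by neither source keep the truncation
    value +1 (unknown/empty space)."""
    w_sum = w_a + w_b
    fused = np.where(
        w_sum > 0,
        (tsdf_a * w_a + tsdf_b * w_b) / np.maximum(w_sum, 1e-8),
        1.0,
    )
    return fused, w_sum

# one voxel observed by both sources, one observed by neither
tsdf_depth = np.array([0.2, 1.0])   # from the depth camera
tsdf_rgb = np.array([-0.2, 1.0])    # from the visible light camera
w_d = np.array([1.0, 0.0])
w_r = np.array([1.0, 0.0])
fused, w = integrate_tsdf(tsdf_depth, w_d, tsdf_rgb, w_r)
print(fused)  # first voxel averages to 0.0 (a surface crossing)
```

The fused zero crossing is where the reconstructed surface lies; shape analysis then runs on the integrated volume.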
-
Patent number: 11816829
Abstract: A novel disparity computation technique is presented which comprises multiple orthogonal disparity maps, generated from approximately orthogonal decomposition feature spaces, collaboratively generating a composite disparity map. Using an approximately orthogonal feature set extracted from such feature spaces produces an approximately orthogonal set of disparity maps that can be composited together to produce a final disparity map. Various methods for dimensioning scenes and objects are presented. One approach extracts the top and bottom vertices of a cuboid, along with the set of lines, whose intersections define such points. It then defines a unique box from these two intersections as well as the associated lines. Orthographic projection is then attempted to recenter the box perspective. This is followed by the extraction of the three-dimensional information that is associated with the box, and finally, the dimensions of the box are computed. The same concepts can apply to hallways, rooms, and any other object.
Type: Grant
Filed: December 4, 2022
Date of Patent: November 14, 2023
Assignee: Golden Edge Holding Corporation
Inventors: Tarek El Dokor, Jordan Cluster
-
Patent number: 11809526
Abstract: The disclosure relates to an object identification method and device as well as a non-transitory computer-readable storage medium. The object identification method includes: acquiring a first object image, and generating a first identification result group according to the first object image, wherein the first identification result group includes one or more first identification results arranged in order of confidence from high to low; acquiring a second object image, and generating a second identification result group based on the second object image, wherein the second identification result group includes one or more second identification results arranged in order of confidence from high to low; and determining whether the first object image and the second object image correspond to the same object to be identified according to the first identification result group and the second identification result group.
Type: Grant
Filed: February 1, 2021
Date of Patent: November 7, 2023
Assignee: Hangzhou Glority Software Limited
Inventors: Qingsong Xu, Qing Li
-
Patent number: 11809524
Abstract: Systems and methods for training an adapter network that adapts a model pre-trained on synthetic images to real-world data are disclosed herein. A system may include a processor and a memory in communication with the processor and having machine-readable instructions that cause the processor to output, using a neural network, a predicted scene that includes a three-dimensional bounding box having pose information of an object, generate a rendered map of the object that includes a rendered shape of the object and a rendered surface normal of the object, and train the adapter network, which adapts the predicted scene to adjust for a deformation of the input image by comparing the rendered map to the output map acting as a ground truth.
Type: Grant
Filed: July 23, 2021
Date of Patent: November 7, 2023
Assignees: Woven Planet North America, Inc., Toyota Research Institute, Inc.
Inventors: Sergey Zakharov, Wadim Kehl, Vitor Guizilini, Adrien David Gaidon
-
Patent number: 11812009
Abstract: An example system for generating light field content is described herein. The system includes a receiver to receive a plurality of images and a calibrator to intrinsically calibrate a camera. The system also includes a corrector and a projector to undistort the images and project the undistorted images to generate undistorted rectilinear images. An extrinsic calibrator may rectify and align the undistorted rectilinear images. Finally, the system includes a view interpolator to perform intermediate view interpolation on the rectified and aligned undistorted rectilinear images.
Type: Grant
Filed: December 26, 2018
Date of Patent: November 7, 2023
Assignee: Intel Corporation
Inventors: Fan Zhang, Oscar Nestares
-
Patent number: 11812007
Abstract: An apparatus including an interface and a processor. The interface may be configured to receive pixel data. The processor may be configured to generate a reference image and a target image from the pixel data, perform disparity operations on the reference image and the target image and build a disparity map in response to the disparity operations. The disparity operations may comprise selecting a guide node from the pixel data comprising a pixel and a plurality of surrounding pixels, determining a peak location for the pixel by performing a full range search, calculating a shift offset peak location for each of the surrounding pixels by performing block matching operations in a local range near the peak location and generating values in a disparity map for the pixel data in response to the peak location for the pixel and the shift offset peak location for each of the surrounding pixels.
Type: Grant
Filed: January 12, 2021
Date of Patent: November 7, 2023
Assignee: Ambarella International LP
Inventors: Ke-ke Ren, Zhi He
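The guide-node strategy in this abstract — a full-range disparity search for one pixel, then only a narrow local search around that peak for its neighbors — can be sketched in one dimension with a simple sum-of-absolute-differences cost. This is an illustrative stand-in, not the patented hardware pipeline; window sizes and ranges are assumptions.

```python
import numpy as np

def match_cost(left_row, right_row, x, d, win=2):
    # Sum-of-absolute-differences over a small window at candidate disparity d
    lo, hi = x - win, x + win + 1
    return np.abs(left_row[lo:hi] - right_row[lo - d:hi - d]).sum()

def guide_node_disparity(left_row, right_row, x_guide, neighbors, max_d=8, local=1):
    """Full-range search for the guide pixel's disparity peak, then a
    narrow local search around that peak for each surrounding pixel,
    which is far cheaper than a full search per pixel."""
    costs = [match_cost(left_row, right_row, x_guide, d) for d in range(max_d)]
    peak = int(np.argmin(costs))
    result = {x_guide: peak}
    for x in neighbors:
        candidates = range(max(0, peak - local), peak + local + 1)
        result[x] = min(candidates, key=lambda d: match_cost(left_row, right_row, x, d))
    return result

right = np.sin(0.5 * np.arange(40))
left = np.roll(right, 3)  # left view shifted by a true disparity of 3 pixels
print(guide_node_disparity(left, right, x_guide=10, neighbors=[9, 11]))
# {10: 3, 9: 3, 11: 3}
```

Since disparity varies smoothly over a surface, the neighbors' peaks are expected to lie near the guide's, which is what makes the restricted local search safe.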
-
Patent number: 11810311
Abstract: A system and method is disclosed having an end-to-end two-stage depth estimation deep learning framework that takes one spherical color image and estimates dense spherical depth maps. The contemplated framework may include a view synthesis (stage 1) and a multi-view stereo matching (stage 2). The combination of the two-stage process may provide the advantage of the geometric constraints from stereo matching to improve depth map quality, without the need of additional input data. It is also contemplated that a spherical warping layer may be used to integrate multiple spherical feature volumes into one cost volume with uniformly sampled inverse depth for the multi-view spherical stereo matching stage. The two-stage spherical depth estimation system and method may be used in various applications including virtual reality, autonomous driving and robotics.
Type: Grant
Filed: October 31, 2020
Date of Patent: November 7, 2023
Assignee: Robert Bosch GMBH
Inventors: Zhixin Yan, Liu Ren, Yuyan Li, Ye Duan
-
Patent number: 11810304
Abstract: Depth information from a depth sensor, such as a LiDAR system, is used to correct perspective distortion for decoding an optical pattern in a first image acquired by a camera. Image data from the first image is spatially correlated with the depth information. The depth information is used to identify a surface in the scene and to distort the first image to generate a second image, such that the surface in the second image is parallel to an image plane of the second image. The second image is then analyzed to decode an optical pattern on the surface identified in the scene.
Type: Grant
Filed: July 12, 2022
Date of Patent: November 7, 2023
Assignee: Scandit AG
Inventors: Matthias Bloch, Christian Floerkemeier, Bernd Schoner
-
Patent number: 11810308
Abstract: Due to factors such as lens distortion and camera misalignment, stereoscopic image pairs often contain vertical disparities. Introduced herein is a method and apparatus that determine and correct vertical disparities in stereoscopic image pairs using an optical flow map. Instead of discarding vertical motion vectors of the optical flow map, the introduced concept extracts and analyzes the vertical motion vectors from the optical flow map and vertically aligns the images using the vertical disparity determined from the vertical motion vectors. The introduced concept recognizes that although not apparent, vertical motion does exist in stereoscopic images and can be used to correct the vertical disparity in stereoscopic images.
Type: Grant
Filed: February 24, 2021
Date of Patent: November 7, 2023
Assignee: NVIDIA Corporation
Inventor: David Cook
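The core idea above — keep the vertical component of the stereo optical-flow map and use it to align the views — reduces, in the simplest global case, to estimating one offset from the vertical flow vectors and shifting a view by it. A hedged sketch under that simplifying assumption (the patent works on a full flow map, not necessarily a single global offset):

```python
import numpy as np

def vertical_disparity(flow):
    """Estimate a global vertical disparity from the vertical (y)
    component of an optical-flow map computed between the two stereo
    views. The median is robust to outlier flow vectors."""
    return float(np.median(flow[..., 1]))

def align_vertically(img, dy):
    # Shift one view by the estimated integer offset to cancel the disparity
    return np.roll(img, -int(round(dy)), axis=0)

flow = np.zeros((4, 4, 2))       # (H, W, [dx, dy]) flow field
flow[..., 1] = 2.0               # every pixel appears 2 rows lower in the other view
dy = vertical_disparity(flow)
print(dy)  # 2.0
```

After the shift, corresponding points lie on the same scanline, which is the precondition horizontal-disparity matchers rely on.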
-
Patent number: 11801630
Abstract: A tubular structure fabricated by additive manufacturing from non-biological building material formulations, and featuring an elongated core, a shell encapsulating the core and an intermediate shell between the core and the shell. Each of the core, the shell and the intermediate shell is made of a different material or a different combination of materials. Both the core and the intermediate shell are sacrificial. Additive manufacturing of the tubular structure is usable for fabricating an object featuring properties of a blood vessel.
Type: Grant
Filed: July 27, 2018
Date of Patent: October 31, 2023
Assignee: Stratasys Ltd.
Inventors: Daniel Dikovsky, Amit Feffer
-
Patent number: 11801699
Abstract: Example implementations relate to emulating 3D texture patterns in a printer system. One example implementation receives an image having a number of image pixels and selects from a plurality of digital substrates which each correspond to a physical substrate. Each digital substrate has luminance change data corresponding to heights of a 3D texture pattern at respective locations of the physical substrate. An image having emulated 3D texture is generated by adjusting the luminance of the image pixels corresponding to respective locations of the physical substrate and according to the corresponding luminance change data.
Type: Grant
Filed: March 19, 2019
Date of Patent: October 31, 2023
Assignee: Hewlett-Packard Development Company, L.P.
Inventor: Gideon Amir
-
Patent number: 11798299
Abstract: Disclosed are systems and methods for generating data sets for training deep learning networks for key point annotations and measurements extraction from photos taken using a mobile device camera. The method includes the steps of receiving a 3D scan model of a 3D object or subject captured from a 3D scanner and a 2D photograph of the same 3D object or subject at a virtual workspace. The 3D scan model is rigged with one or more key points. A superimposed image of a pose-adjusted and aligned 3D scan model superimposed over the 2D photograph is captured by a virtual camera in the virtual workspace. Training data for a key point annotation DLN is generated by repeating the steps for a plurality of objects belonging to a plurality of object categories. The key point annotation DLN learns from the training data to produce key point annotations of objects from 2D photographs captured using any mobile device camera.
Type: Grant
Filed: November 2, 2020
Date of Patent: October 24, 2023
Assignee: Bodygram, Inc.
Inventors: Kyohei Kamiyama, Chong Jin Koh
-
Patent number: 11793402
Abstract: A system for generating a 3D model of a surgical site includes a 3D endoscope and a computing device coupled to the 3D endoscope. The 3D endoscope includes a scanner for scanning a surface of a surgical site and a camera source for generating images of the surgical site. A 3D model of the surgical site, including objects therein, is generated using scan data and image data. The 3D model is updated by detecting a change in the surgical site, isolating a region of the surgical site where the change is detected, generating second scan data by scanning the surface of the isolated region, and updating the 3D model generated using the second scan data of the surface of the isolated region.
Type: Grant
Filed: May 24, 2021
Date of Patent: October 24, 2023
Assignee: COVIDIEN LP
Inventor: John W. Komp
-
Patent number: 11793280
Abstract: The method for determining at least one morphological data item of a wriststrap wearer in order to prepare (E3) a wristwatch strap taking into account this item includes: acquiring (E1) at least one morphological data item (1; 2) of the wriststrap wearer from an acquisition device of a determination system (11) provided with at least one sensor, such as a camera, a laser or a lens, configured to scan, photograph, or film at least a portion of the wrist and/or hand of the wearer, wherein the determination system (11) is designed to receive and optionally emit waves to acquire at least one morphological characteristic of the wearer of the bracelet in a first operation (O11), the at least one morphological data item being a dimension (1) from among a circumference and/or height and/or width of a wrist portion (p1) and/or of a portion of the hand (p2) of the wearer.
Type: Grant
Filed: November 25, 2019
Date of Patent: October 24, 2023
Assignee: ROLEX SA
Inventors: Clément Grozel, Franck Haegy, Julien Jaffré
-
Patent number: 11797132
Abstract: The present disclosure is related to a method and device for detecting a touch between at least part of a first object and at least part of a second object, wherein the at least part of the first object has a different temperature than the at least part of the second object. The method includes providing at least one thermal image of a portion of the second object, determining in at least part of the at least one thermal image a pattern which is indicative of a particular value or range of temperature or a particular value or range of temperature change, and using the determined pattern for detecting a touch between the at least part of the first object and the at least part of the second object.
Type: Grant
Filed: December 23, 2020
Date of Patent: October 24, 2023
Assignee: Apple Inc.
Inventor: Daniel Kurz
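The detection step described here — finding a thermal-image pattern in a particular temperature band — can be sketched as a simple band threshold: a warm finger leaves residual heat on a cooler surface, and enough pixels inside the band signal a touch. A minimal illustration; the thresholds, units, and minimum-region size are all assumptions, not values from the patent.

```python
import numpy as np

def detect_touch(thermal, t_lo, t_hi, min_pixels=4):
    """Flag a touch when enough pixels of the thermal image fall in a
    temperature band characteristic of residual heat left by a warmer
    object (e.g., a fingertip) on a cooler surface."""
    mask = (thermal >= t_lo) & (thermal <= t_hi)
    return bool(mask.sum() >= min_pixels), mask

surface = np.full((6, 6), 22.0)   # surface at room temperature (deg C)
surface[2:4, 2:4] = 30.0          # warm patch left by a fingertip
touched, mask = detect_touch(surface, t_lo=28.0, t_hi=34.0)
print(touched)  # True
```

A production system would also track the pattern over time (temperature change), since residual heat decays after the finger lifts.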
-
Patent number: 11798196
Abstract: A system comprises an encoder configured to compress attribute information and/or spatial information for three-dimensional (3D) visual volumetric content and/or a decoder configured to decompress compressed attribute and/or spatial information for the 3D visual volumetric content. The encoder is configured to convert 3D visual volumetric content, such as a point cloud or mesh, into image based patch representations. The encoder is further configured to select one or more reference patches for copying or prediction, such that metadata for copying or predicting a patch based on the reference patch is signaled without explicitly signaling a full set of information for the copied or predicted patch. Likewise, a decoder is configured to receive such information and reconstruct a 3D version of the 3D visual volumetric content using both signaled and predicted or copied patches.
Type: Grant
Filed: January 8, 2021
Date of Patent: October 24, 2023
Assignee: Apple Inc.
Inventors: Jungsun Kim, Khaled Mammou, Alexandros Tourapis
-
Patent number: 11797863
Abstract: A method of generating synthetic images of virtual scenes includes: placing, by a synthetic data generator implemented by a processor and memory, three-dimensional (3-D) models of objects in a 3-D virtual scene; adding, by the synthetic data generator, lighting to the 3-D virtual scene, the lighting including one or more illumination sources; applying, by the synthetic data generator, imaging modality-specific materials to the 3-D models of objects in the 3-D virtual scene in accordance with a selected multimodal imaging modality, each of the imaging modality-specific materials including an empirical model; setting a scene background in accordance with the selected multimodal imaging modality; and rendering, by the synthetic data generator, a two-dimensional image of the 3-D virtual scene based on the selected multimodal imaging modality to generate a synthetic image in accordance with the selected multimodal imaging modality.
Type: Grant
Filed: January 4, 2021
Date of Patent: October 24, 2023
Assignee: Intrinsic Innovation LLC
Inventors: Kartik Venkataraman, Agastya Kalra, Achuta Kadambi
-
Patent number: 11798188
Abstract: The present disclosure provides a method of remotely measuring a size of a pupil, including: acquiring an image of a to-be-measured person by using a detection device; acquiring an image of a pupil of the to-be-measured person from the image of the to-be-measured person; measuring a distance between the to-be-measured person and the detection device by using the detection device; and calculating an actual size of the pupil of the to-be-measured person based on the measured distance and the image of the pupil of the to-be-measured person. The present disclosure further provides an apparatus for remotely measuring a size of a pupil, an electronic device, and a non-transitory computer-readable medium.
Type: Grant
Filed: February 24, 2021
Date of Patent: October 24, 2023
Assignees: Tsinghua University, Nuctech Company Limited
Inventors: Zhiqiang Chen, Yuanjing Li, Jianmin Li, Xianghao Wu, Bin Liu, Guocheng An
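The final calculation in this abstract — actual pupil size from measured distance plus the pupil's image — follows from pinhole-camera scaling: a feature spanning s pixels at distance Z from a camera with focal length f (in pixels) has real size s·Z/f. A one-line sketch with illustrative values (not the patent's calibration procedure):

```python
def pupil_size_mm(pupil_px, distance_mm, focal_length_px):
    """Pinhole-camera scaling: real size = pixel size * distance / focal
    length, with the focal length expressed in pixels. All names and
    values here are illustrative assumptions."""
    return pupil_px * distance_mm / focal_length_px

# an 8-pixel-wide pupil imaged from 500 mm with f = 1000 px
print(pupil_size_mm(8.0, 500.0, 1000.0))  # 4.0 (mm)
```

This is why the distance measurement matters: the same 8-pixel pupil at twice the distance would be twice as large in reality.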
-
Patent number: 11790547
Abstract: An information processing apparatus includes management circuitry that manages three-dimensional data obtained by a scanner in the order of obtaining, display controller circuitry that controls a display to show a three-dimensional image generated from combined data generated by combining a plurality of pieces of three-dimensional data, and input circuitry that accepts an operation by a user. When the input circuitry accepts an operation to select three-dimensional data, the management circuitry sets, from the selected three-dimensional data to last obtained three-dimensional data, data to be deleted. The display controller circuitry controls the display to show a three-dimensional image modified to exclude the data to be deleted.
Type: Grant
Filed: January 14, 2021
Date of Patent: October 17, 2023
Assignee: J. MORITA MFG. CORP.
Inventors: Keisuke Sorimoto, Masayuki Sano
-
Patent number: 11783482
Abstract: The disclosure is related to a panoramic radiography device. The panoramic radiography device may include an image processor and a viewer module. The image processor may be configured to produce a primary panoramic image using a first image layer and a secondary panoramic image using a secondary image layer based on a plurality of image frame data, wherein the second image layer is different from the first image layer in at least one of a number, a position, a shape, an angle, and a thickness. The viewer module may be configured to i) provide a graphic user interface having a primary display area and a secondary display area arranged at a predetermined position of the primary display area, ii) display the primary panoramic image at the primary display area, and iii) display a part of the secondary panoramic image at the secondary display area, wherein the part of the secondary panoramic image corresponds to the predetermined position.
Type: Grant
Filed: May 31, 2022
Date of Patent: October 10, 2023
Assignees: VATECH Co., Ltd., VATECH EWOO Holdings Co., Ltd.
Inventor: Sung Il Choi
-
Patent number: 11783506
Abstract: A method of determining an angle of a trailer relative to a vehicle. The method includes generating a projection on a trailer with a projector. An image is obtained of the projection on the trailer with a camera. An angle of the trailer relative to a vehicle is determined by comparing the image of the projection with a known pattern of the projection.
Type: Grant
Filed: December 22, 2020
Date of Patent: October 10, 2023
Assignee: Continental Autonomous Mobility US, LLC
Inventors: Julien Ip, Kyle Patrick Carpenter, Xin Yu
-
Patent number: 11783534
Abstract: Embodiments of the present invention provide systems, methods, and computer storage media which retarget 2D screencast video tutorials into an active VR host application. VR-embedded widgets can render on top of a VR host application environment while the VR host application is active. Thus, VR-embedded widgets can provide various interactive tutorial interfaces directly inside the environment of the VR host application. For example, VR-embedded widgets can present external video content, related information, and corresponding interfaces directly in a VR painting environment, so a user can simultaneously access external video (e.g., screencast video tutorials) and a VR painting. Possible VR-embedded widgets include a VR-embedded video player overlay widget, a perspective thumbnail overlay widget (e.g., a user-view thumbnail overlay, an instructor-view thumbnail overlay, etc.), an awareness overlay widget, a tutorial steps overlay widget, and/or a controller overlay widget, among others.
Type: Grant
Filed: May 17, 2021
Date of Patent: October 10, 2023
Assignee: Adobe Inc.
Inventors: Cuong Nguyen, Stephen Joseph DiVerdi, Balasaravanan Thoravi Kumaravel
-
Patent number: 11774983
Abstract: The described positional awareness techniques employ sensory data gathering and analysis hardware and, with reference to specific example implementations, implement improvements in the use of sensors, techniques and hardware design that can enable specific embodiments to find new areas to cover by a robot performing an area coverage task of an unexplored area. The sensory data are gathered from an operational camera and one or more auxiliary sensors.
Type: Grant
Filed: December 20, 2019
Date of Patent: October 3, 2023
Assignee: Trifo, Inc.
Inventors: Zhe Zhang, Qingyu Chen, Yen-Cheng Liu, Weikai Li
-
Patent number: 11770551
Abstract: A method includes receiving a video comprising images representing an object, and determining, using a machine learning model, based on a first image of the images, and for each respective vertex of vertices of a bounding volume for the object, first two-dimensional (2D) coordinates of the respective vertex. The method also includes tracking, from the first image to a second image of the images, a position of each respective vertex along a plane underlying the bounding volume, and determining, for each respective vertex, second 2D coordinates of the respective vertex based on the position of the respective vertex along the plane. The method further includes determining, for each respective vertex, (i) first three-dimensional (3D) coordinates of the respective vertex based on the first 2D coordinates and (ii) second 3D coordinates of the respective vertex based on the second 2D coordinates.
Type: Grant
Filed: December 15, 2020
Date of Patent: September 26, 2023
Assignee: Google LLC
Inventors: Adel Ahmadyan, Tingbo Hou, Jianing Wei, Liangkai Zhang, Artsiom Ablavatski, Matthias Grundmann
-
Patent number: 11769275
Abstract: This method for inter-predictive encoding of a time-varying 3D point cloud including a series of successive frames divided into 3D blocks into at least one bitstream comprises encoding (20) 3D motion information including a geometric transformation comprising rotation information representative of a rotation transformation and translation information representative of a translation transformation, wherein the translation information comprises a vector ΔT representing an estimation error of the translation transformation.
Type: Grant
Filed: October 11, 2018
Date of Patent: September 26, 2023
Assignee: INTERDIGITAL VC HOLDINGS, INC.
Inventors: Sebastien Lasserre, Saurabh Puri, Kangying Cai, Julien Ricard, Celine Guede
-
Patent number: 11763432
Abstract: The present disclosure provides a multi-exposure image fusion (MEF) method based on a feature distribution weight of a multi-exposure image, including: performing color space transformation (CST) on an image, determining a luminance distribution weight of the image, determining an exposure distribution weight of the image, determining a local gradient weight of the image, determining a final weight, and determining a fused image. The present disclosure combines the luminance distribution weight of the image, the exposure distribution weight of the image and the local gradient weight of the image to obtain the final weight, and fuses the input image and the weight with the existing pyramid-based multi-resolution fusion method to obtain the fused image, thereby solving the technical problem that an existing MEF method does not consider the overall feature distribution of the multi-exposure image.
Type: Grant
Filed: February 24, 2022
Date of Patent: September 19, 2023
Assignee: XI'AN UNIVERSITY OF POSTS & TELECOMMUNICATIONS
Inventors: Weihua Liu, Biyan Ma, Ying Liu, Yanchao Gong, Fuping Wang
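A drastically simplified sketch of the weight-and-fuse idea: it keeps only a well-exposedness weight (standing in for the three weights combined above) and replaces the pyramid-based multi-resolution fusion with a plain per-pixel weighted average. Everything here is a hypothetical illustration, not the patent's method:

```python
import math

def exposure_weight(v, mu=0.5, sigma=0.2):
    # Well-exposedness: favor pixels near mid-gray with a Gaussian around mu.
    return math.exp(-((v - mu) ** 2) / (2 * sigma ** 2))

def fuse(images):
    """images: list of same-sized 2D lists with luminance values in [0, 1].
    Per pixel: normalize the per-image weights, then take the weighted mean."""
    h, w = len(images[0]), len(images[0][0])
    fused = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            weights = [exposure_weight(img[y][x]) for img in images]
            total = sum(weights) or 1.0
            fused[y][x] = sum(wt * img[y][x]
                              for wt, img in zip(weights, images)) / total
    return fused

under = [[0.1, 0.1], [0.1, 0.1]]   # under-exposed frame
over = [[0.9, 0.9], [0.9, 0.9]]    # over-exposed frame
mid = [[0.5, 0.5], [0.5, 0.5]]     # well-exposed frame
result = fuse([under, over, mid])
```

The well-exposed frame dominates, pulling the fused value toward mid-gray.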
-
Patent number: 11763563
Abstract: A system for monitoring and recording and processing an activity includes one or more cameras for automatically recording video of the activity. A processor and memory, associated with and in communication with the camera, are disposed near the location of the activity. The system may include AI logic configured to identify a user recorded within a video frame captured by the camera. The system may also detect and identify a user when the user is located within a predetermined area. The system may include a video processing engine configured to process images within the video frame to identify the user and may modify and format the video upon identifying the user and the activity. The system may include a communication module to communicate formatted video to a remote video processing system, which may further process the video and enable access to a mobile app of the user.
Type: Grant
Filed: February 25, 2022
Date of Patent: September 19, 2023
Assignee: Hole-In-One Media, Inc.
Inventor: Kevin R. Imes
-
Patent number: 11762957
Abstract: An RGB-D fusion information-based obstacle target classification method includes: collecting an original image through a binocular camera within a target range, and acquiring a disparity map of the original image; collecting a color-calibrated RGB image through a reference camera of the binocular camera within the target range; acquiring an obstacle target through disparity clustering in accordance with the disparity map and the color-calibrated RGB image, and acquiring a target disparity map and a target RGB image of the obstacle target; calculating depth information about the obstacle target in accordance with the target disparity map; and acquiring a classification result of the obstacle target through RGB-D channel information fusion in accordance with the depth information and the target RGB image.
Type: Grant
Filed: August 12, 2021
Date of Patent: September 19, 2023
Assignee: Beijing Smarter Eye Technology Co. Ltd.
Inventors: Chao Yang, An Jiang, Ran Meng, Hua Chai, Feng Cui
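The depth-from-disparity step relies on the standard rectified-stereo relation Z = f·B/d (focal length f in pixels, baseline B, disparity d in pixels). A minimal sketch, with illustrative focal-length and baseline values that are assumptions, not from the patent:

```python
# Standard stereo depth recovery: Z = f * B / d.
def disparity_to_depth(disparity_px, focal_px, baseline_m):
    if disparity_px <= 0:
        # Zero/negative disparity: point at infinity or invalid match.
        return float("inf")
    return focal_px * baseline_m / disparity_px

# A 64-pixel disparity with a 700 px focal length and a 12 cm baseline:
z = disparity_to_depth(64, focal_px=700.0, baseline_m=0.12)
```

Larger disparities map to nearer obstacles; this per-pixel depth forms the "D" channel fused with RGB for classification.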
-
Patent number: 11763365
Abstract: There is provided, in accordance with some embodiments of the present invention, an apparatus, system, and method for personalized online product fitting. One embodiment of the invention includes a method for personalized shopping having the steps of: an automated shopping assistant system accessing product data; a matchmaking system accessing user history data; the matchmaking system accessing user preference data; the matchmaking system accessing user anatomical data acquired from an automated shopping assistant apparatus; and the automated shopping assistant system matching user history, preference and anatomical data with data to generate a personalized matching system.
Type: Grant
Filed: June 27, 2018
Date of Patent: September 19, 2023
Assignee: NIKE, Inc.
Inventors: David Bleicher, Tamir Lousky
-
Patent number: 11763478
Abstract: Various implementations disclosed herein include devices, systems, and methods that generate floorplans and measurements using a three-dimensional (3D) representation of a physical environment generated based on sensor data.
Type: Grant
Filed: January 15, 2021
Date of Patent: September 19, 2023
Assignee: Apple Inc.
Inventors: Yang Yang, Boyuan Sun, Afshin Dehghan, Feng Tang, Bin Liu, Fengfu Li
-
Patent number: 11763579
Abstract: A method, computer system and computer-readable medium for determining a surface pattern for a target object using an evolutionary algorithm such as a genetic algorithm, a parameterized texture-generating function, a 3D renderer for rendering images of a 3D model of the target object with a texture obtained from the parameterized texture-generating function, and an object recognition model to process the images and predict whether or not each image contains an object of the target object's type or category. Sets of parameters are generated using the evolutionary algorithm, and the accuracy of the object recognition model's predictions on the images, with the 3D model textured according to each set of parameters, is used to determine a fitness score by which sets of parameters are scored for the purpose of obtaining further generations of sets of parameters, for example through genetic-algorithm operations such as mutation and crossover.
Type: Grant
Filed: May 7, 2020
Date of Patent: September 19, 2023
Assignee: The Secretary of State for Defence
Inventor: Geraint Johnson
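A toy sketch of such an evolutionary loop with selection, crossover and mutation. The renderer and object recognition model are replaced by a stand-in fitness function, and every name here is hypothetical rather than taken from the patent:

```python
import random

random.seed(0)

def fitness(params):
    # Stand-in for: render the 3D model with texture(params), run the
    # recognizer, and score the texture (here: closeness of each parameter
    # to an arbitrary optimum of 0.7; higher is better).
    return -sum((p - 0.7) ** 2 for p in params)

def evolve(pop_size=20, n_params=4, generations=30, mutation=0.1):
    pop = [[random.random() for _ in range(n_params)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]          # selection (elitist)
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, n_params)  # single-point crossover
            child = a[:cut] + b[cut:]
            child = [p + random.gauss(0, mutation) for p in child]  # mutation
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
```

Because the parents survive unchanged each generation, the best fitness score never decreases across generations.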
-
Patent number: 11763492
Abstract: In some embodiments, a method includes receiving a first image and a second image from a stereo camera pair. The method includes selecting a first row of pixels from the rectified image and a set of rows of pixels from the second image and comparing the first row of pixels with each row of pixels from the set of rows of pixels to determine disparity values. The method includes determining a pair of rows of pixels having the first row of pixels and a second row of pixels from the set of rows of pixels. The pair of rows of pixels has an offset no greater than an offset between the first row of pixels and each row of pixels from remaining rows of pixels. The method includes adjusting, based on the offset, the relative rotational position between the first stereo camera and the second stereo camera.
Type: Grant
Filed: June 10, 2022
Date of Patent: September 19, 2023
Assignee: PlusAI, Inc.
Inventors: Anurag Ganguli, Timothy P. Daly, Jr., Mayank Gupta, Wenbin Wang, Huan Yang Chang
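The row-matching search can be illustrated as below: one row of the first image is compared against several candidate rows of the second image, and the pairing with the smallest cost gives the vertical offset used to correct the cameras' relative rotation. This is a hypothetical simplification using sum-of-absolute-differences as the row comparison, not the patent's actual disparity computation:

```python
def row_sad(row_a, row_b):
    # Sum of absolute differences between two equal-length pixel rows.
    return sum(abs(a - b) for a, b in zip(row_a, row_b))

def best_matching_row(image1, image2, row_index, search_radius=2):
    """Return (best_row_index, vertical_offset) of the row in image2 that
    best matches image1[row_index], searched within +/- search_radius."""
    target = image1[row_index]
    candidates = range(max(0, row_index - search_radius),
                       min(len(image2), row_index + search_radius + 1))
    best = min(candidates, key=lambda r: row_sad(target, image2[r]))
    return best, best - row_index

# image2 is image1 shifted down by one row, so row 1 should match row 2.
image1 = [[0, 0, 0], [9, 9, 9], [0, 0, 0], [5, 5, 5]]
image2 = [[1, 1, 1], [0, 0, 0], [9, 9, 9], [0, 0, 0]]
row, offset = best_matching_row(image1, image2, row_index=1)
```

A nonzero offset across many rows indicates a rectification (relative rotation) error to be corrected.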
-
Patent number: 11763570
Abstract: An obstacle detection device includes a stereo camera that captures IR images on the right side and left side; a point cloud data creation unit that creates point cloud data having three-dimensional position information from the IR images captured by the stereo camera; an obstacle detection unit that detects an obstacle according to point cloud data; an invalid area identification unit that identifies whether an invalid area with no point cloud is present in point cloud data; and a target recognition unit that recognizes a particular target in the invalid area with no point cloud according to an IR image. When a particular target is recognized by the target recognition unit, the obstacle detection unit does not determine the invalid area with no point cloud to be an obstacle.
Type: Grant
Filed: June 23, 2021
Date of Patent: September 19, 2023
Assignee: Alps Alpine Co., LTD.
Inventors: Kaoru Mashiko, Tomoki Nakagawa, Tatsuya Matsui
-
Patent number: 11763469
Abstract: A registration technique is provided that can combine one or more related registrations to enhance accuracy of a registration of image volumes. A registration relationship between a first source volume and a target volume and a registration relationship between the first source volume and a second source volume are concatenated to provide an estimate of a registration relationship between the second source volume and the target volume. The estimate is utilized to inform the direct registration of the second source volume to the target volume or utilized in place of the direct registration.
Type: Grant
Filed: June 4, 2019
Date of Patent: September 19, 2023
Assignee: MIM Software Inc.
Inventor: Jonathan William Piper
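The concatenation can be sketched with homogeneous 4×4 transforms: composing a first-source-to-target registration with a second-source-to-first-source registration yields an estimate of the second-source-to-target registration. Illustrative pure Python with translation-only transforms; the variable names are assumptions:

```python
def matmul4(A, B):
    # 4x4 homogeneous matrix product: (A @ B) applies B first, then A.
    return [[sum(A[i][k] * B[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def translation(tx, ty, tz):
    return [[1, 0, 0, tx], [0, 1, 0, ty], [0, 0, 1, tz], [0, 0, 0, 1]]

# T_s1_to_target maps source-1 space into target space;
# T_s2_to_s1 maps source-2 space into source-1 space.
T_s1_to_target = translation(5, 0, 0)
T_s2_to_s1 = translation(0, 3, 0)

# Concatenated estimate of the second-source-to-target registration:
T_s2_to_target = matmul4(T_s1_to_target, T_s2_to_s1)
```

The resulting estimate can seed or replace the direct source-2-to-target registration, as the abstract describes.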
-
Patent number: 11750943
Abstract: Provided are a method and a device for correcting lateral chromatic aberration (LCA), a storage medium and computer equipment. In the method, a relationship model between lens position and magnitude of LCA is constructed based on preset parameters of lens positions, and the relationship model is stored as calibration data; system parameters of the camera to be corrected and the pre-stored calibration data are obtained; the LCA of the camera to be corrected is calculated from the system parameters; and the LCA is corrected using the calibration data. With this method, the LCA of the lens can be removed when the focus distance changes, and the method is suitable for mass production.
Type: Grant
Filed: October 20, 2021
Date of Patent: September 5, 2023
Assignee: AAC Optics Solutions Pte. Ltd.
Inventors: Vida Fakour Sevom, Dmytro Paliy, Juuso Gren
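A minimal sketch of such a lens-position-to-LCA relationship model stored as calibration data and queried at runtime. The table values and the use of linear interpolation are illustrative assumptions, not the patent's actual model:

```python
# Hypothetical calibration data: (lens_position, lca_magnitude_px),
# sorted by position. Values are made up for illustration.
CALIBRATION = [
    (0.0, 0.2), (0.5, 0.8), (1.0, 1.5),
]

def lca_for_position(pos):
    """Look up the LCA magnitude for a lens (focus) position by linear
    interpolation over the stored calibration table, clamping at the ends."""
    pts = CALIBRATION
    if pos <= pts[0][0]:
        return pts[0][1]
    if pos >= pts[-1][0]:
        return pts[-1][1]
    for (x0, y0), (x1, y1) in zip(pts, pts[1:]):
        if x0 <= pos <= x1:
            t = (pos - x0) / (x1 - x0)
            return y0 + t * (y1 - y0)

lca = lca_for_position(0.75)
```

The interpolated magnitude would then drive a per-channel radial rescaling of the R and B planes to cancel the aberration.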
-
Patent number: 11748947
Abstract: A display method is a display method performed by a display device that operates in conjunction with a mobile object, and includes: determining which one of first surrounding information, which is video showing a surrounding condition of the mobile object and is generated using two-dimensional information, and second surrounding information, which is video showing the surrounding condition of the mobile object and is generated using three-dimensional data, is to be displayed, based on a driving condition of the mobile object; and displaying the one of the first surrounding information and the second surrounding information that is determined to be displayed.
Type: Grant
Filed: December 17, 2021
Date of Patent: September 5, 2023
Assignee: PANASONIC INTELLECTUAL PROPERTY CORPORATION OF AMERICA
Inventors: Tatsuya Koyama, Takahiro Nishi, Toshiyasu Sugio, Tadamasa Toma, Satoshi Yoshikawa, Toru Matsunobu
-
Patent number: 11743585
Abstract: An electronic apparatus is provided. The electronic apparatus includes at least one camera module including at least one lens, a first sensor configured to detect a motion of the electronic apparatus, and at least one processor. The processor is configured to perform a first focusing operation of determining a target position of the at least one lens by focusing processing on a subject and moving the at least one lens to the target position, and to perform a second focusing operation of additionally driving the at least one lens, based on a calculated focusing correction value for compensating for the motion, according to a determination that a first condition is satisfied: the photographing distance (the distance to the subject) is less than a distance reference value, and a depth-of-field value is less than or equal to a depth reference value.
Type: Grant
Filed: March 31, 2022
Date of Patent: August 29, 2023
Assignee: Samsung Electronics Co., Ltd.
Inventors: Kioh Jung, Dongyoul Park, Soonkyoung Choi
-
Patent number: 11742962
Abstract: A method of monitoring an antenna array comprises generating first image data at a first time. The first image data is reproducible as a first image of an antenna array unit. The method further comprises generating second image data at a second time. The second image data is reproducible as a second image of the antenna array unit. The method further comprises comparing the first image data and the second image data. The method further comprises transmitting an alert that is indicative of the presence of at least one fault with the antenna array unit at the second time. The alert is transmitted in response to the comparison between the first image data and the second image data indicating the presence of the at least one fault with the antenna array unit at the second time.
Type: Grant
Filed: September 13, 2021
Date of Patent: August 29, 2023
Assignee: Quanta Computer Inc.
Inventor: Chi-Sen Yen
-
Patent number: 11741714
Abstract: A system for monitoring and recording and processing an activity includes one or more cameras for automatically recording video of the activity. A processor and memory, associated with and in communication with the camera, are disposed near the location of the activity. The system may include AI logic configured to identify a user recorded within a video frame captured by the camera. The system may also detect and identify a user when the user is located within a predetermined area. The system may include a video processing engine configured to process images within the video frame to identify the user and may modify and format the video upon identifying the user and the activity. The system may include a communication module to communicate formatted video to a remote video processing system, which may further process the video and enable access to a mobile app of the user.
Type: Grant
Filed: August 12, 2022
Date of Patent: August 29, 2023
Assignee: HOLE-IN-ONE MEDIA, INC.
Inventor: Kevin R. Imes
-
Patent number: 11740358
Abstract: A method of and system for processing Light Detection and Ranging (LIDAR) point cloud data. The method is executable by an electronic device communicatively coupled to a LIDAR installed on a vehicle, the LIDAR having a plurality of lasers for capturing LIDAR point cloud data. The method includes receiving first LIDAR point cloud data captured by the LIDAR; and executing a Machine Learning Algorithm (MLA) for: analyzing a first plurality of LIDAR points of the first point cloud data in relation to a response pattern of the plurality of lasers; retrieving grid representation data of an area surrounding the vehicle; and determining whether the first plurality of LIDAR points is associated with a blind spot, the blind spot preventing a detection algorithm of the electronic device from detecting the presence of at least one object surrounding the vehicle when such an object is present.
Type: Grant
Filed: July 15, 2020
Date of Patent: August 29, 2023
Assignee: YANDEX SELF DRIVING GROUP LLC
Inventors: Boris Konstantinovich Yangel, Maksim Ilich Stebelev
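The blind-spot check can be caricatured as comparing the LIDAR points actually returned in a sector against the returns the lasers' response pattern would normally produce there. A hypothetical sketch; the ratio threshold and counting scheme are assumptions, not the patent's MLA:

```python
# Hypothetical sketch: flag a sector as a blind spot when far fewer points
# come back than the lasers' response pattern predicts for that sector.

def is_blind_spot(expected_returns, observed_returns, min_ratio=0.3):
    if expected_returns == 0:
        return False  # nothing was expected, so nothing is missing
    return (observed_returns / expected_returns) < min_ratio

# 20 returns where the response pattern predicts 128 -> likely a blind spot.
flagged = is_blind_spot(expected_returns=128, observed_returns=20)
```

A downstream planner could then treat flagged sectors as "unknown" rather than "empty", since an object there might simply be undetectable.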
-
Patent number: 11743994
Abstract: A load control system may be configured using a graphical user interface software. The graphical user interface software may display a first icon and a second icon. The first icon may represent a first electrical device and the second icon may represent a second electrical device. The first icon and the second icon may represent the relative location of the first electrical device and the second electrical device within a load control environment. The graphical user interface software may display a line from a selected icon (e.g., first icon) to a cursor. The graphical user interface software may adjust the line from the selected icon, for example, as the cursor moves. The graphical user interface software may define and store an association between the first electrical device and a second electrical device, for example, in response to the user selecting the first icon and the second icon.
Type: Grant
Filed: November 30, 2020
Date of Patent: August 29, 2023
Assignee: Lutron Technology Company LLC
Inventors: Ritika Arora, Manisha Dahiya Baluja, John N. Callen, Erica L. Clymer, Sanjeev Kumar, Mark Law, Sandeep Mudabail Raghuram, Anurag Singh, Christopher Spencer