Patents Examined by Jayesh A Patel
-
Patent number: 12046001
Abstract: The disclosure provides example visual localization methods. In one method, a terminal device obtains an image of a building and generates a descriptor based on the image. The descriptor includes information about a horizontal viewing angle between a first vertical feature line and a second vertical feature line in the image. The first vertical feature line indicates a first facade intersection line of the building, and the second vertical feature line indicates a second facade intersection line of the building. The terminal device performs matching in a preset descriptor database based on the descriptor to obtain localization information of the place where the image was photographed.
Type: Grant
Filed: February 8, 2022
Date of Patent: July 23, 2024
Assignee: Huawei Technologies Co., Ltd.
Inventors: Ran Ding, Yan Zhou, Yongliang Wang
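The angle-based descriptor lends itself to a small sketch (illustrative only, not the patented implementation): assuming a pinhole camera model, the pixel columns of two detected vertical feature lines and the camera's horizontal field of view determine the horizontal viewing angle between the lines, and localization reduces to a nearest-angle lookup in a descriptor database. All function and field names below are hypothetical.

```python
import math

def viewing_angle_between_lines(x1, x2, image_width, hfov_deg):
    """Horizontal viewing angle (degrees) between two vertical lines
    detected at pixel columns x1 and x2, for a pinhole camera with the
    given horizontal field of view."""
    # Focal length in pixels, derived from the horizontal FOV.
    f = (image_width / 2) / math.tan(math.radians(hfov_deg) / 2)
    cx = image_width / 2
    a1 = math.atan((x1 - cx) / f)
    a2 = math.atan((x2 - cx) / f)
    return abs(math.degrees(a2 - a1))

def match_descriptor(angle, database):
    """Return the database entry whose stored angle is closest to the query."""
    return min(database, key=lambda entry: abs(entry["angle"] - angle))
```

A real descriptor would combine several such angles and spatial context; the single-angle lookup is only meant to show the matching idea.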
-
Patent number: 12046015
Abstract: An image classification system for tracking a net sport is described. The system includes a visual device configured to capture video footage of a match. A computing device is communicatively connected to the visual device and is configured to receive the video footage of the match from the visual device, translate the video footage into a data representation by logging events with timestamps, and display the data representation to a user.
Type: Grant
Filed: June 5, 2023
Date of Patent: July 23, 2024
Inventor: David Velardo
-
Patent number: 12039712
Abstract: One aspect provides operating a mobile pipe inspection platform to obtain two or more types of sensor data for the interior of a pipe; analyzing, using a processor, the two or more types of sensor data using a trained model, where the trained model is trained using a dataset including training sensor data of pipe interiors; the analyzing including: identifying, using a processor, a pipe feature location using a first type of the two or more types of sensor data; and classifying, using a processor, an identified pipe feature using a second type of the two or more types of sensor data; and thereafter producing, using a processor, an output including an indication of the classified pipe feature. Other aspects are described and claimed.
Type: Grant
Filed: February 26, 2021
Date of Patent: July 16, 2024
Assignee: RedZone Robotics, Inc.
Inventors: Justin Starr, Galin Konakchiev, Foster J Salotti, Mark Jordan, Nate Alford, Thorin Tobiassen, Todd Kueny, Jason Mizgorski
-
Patent number: 12039762
Abstract: An attribute-based point cloud strip division method. The method first performs spatial division of a certain depth on a point cloud to obtain a plurality of local point clouds; it then sorts the attribute values within the local point clouds and, on that basis, further divides the point cloud into strips that have low geometric overhead and a uniform number of points. By jointly using the spatial positions and attribute information of the points, strip division gathers points with similar attributes and related spatial positions into one strip as far as possible, which makes it easier to exploit the redundancy of attribute information between adjacent points and improves the performance of point cloud attribute compression.
Type: Grant
Filed: April 12, 2019
Date of Patent: July 16, 2024
Assignee: PEKING UNIVERSITY SHENZHEN GRADUATE SCHOOL
Inventors: Ge Li, Yiting Shao
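The two-stage division can be sketched minimally, assuming a uniform grid for the spatial step and a single scalar attribute per point (the actual method uses a depth-limited spatial division; all names here are hypothetical):

```python
def divide_into_strips(points, cell_size, points_per_strip):
    """points: list of (x, y, z, attribute) tuples.
    1) Group points into spatial cells (coarse spatial division).
    2) Inside each cell, sort by attribute value.
    3) Concatenate the cells and cut into strips of uniform size."""
    cells = {}
    for p in points:
        key = (int(p[0] // cell_size), int(p[1] // cell_size), int(p[2] // cell_size))
        cells.setdefault(key, []).append(p)
    ordered = []
    for key in sorted(cells):
        # Attribute-sorted order within each spatial cell keeps points with
        # similar attributes and nearby positions adjacent in the stream.
        ordered.extend(sorted(cells[key], key=lambda p: p[3]))
    return [ordered[i:i + points_per_strip]
            for i in range(0, len(ordered), points_per_strip)]
```

Cutting the attribute-sorted stream into fixed-size pieces is what yields strips with a uniform number of points.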
-
Patent number: 12036060
Abstract: The present disclosure relates to an artificial intelligence-based oral CT automatic color conversion device and a method of driving the same. The device according to an embodiment of the present invention comprises: a storage unit that stores a color conversion image related to the bone density of an alveolar bone, prepared in advance by a user; and a control unit that, when an oral CT input image of a patient is received, converts the color of the received image on the basis of a predetermined parameter value, and adjusts that parameter value using the error between the converted image and the stored color conversion image, so as to automatically convert the color of oral CT input images received later.
Type: Grant
Filed: October 8, 2020
Date of Patent: July 16, 2024
Assignees: MEGAGEN IMPLANT CO., LTD., KYUNGPOOK NATIONAL UNIVERSITY HOSPITAL, KYUNGPOOK NATIONAL UNIVERSITY INDUSTRY-ACADEMIC COOPERATION FOUNDATION
Inventors: Kwang Bum Park, Keun Oh Park, Tae Gyoun Lee, Sung Moon Jeong, Dong Won Woo
-
Patent number: 12033340
Abstract: A system including an acquisition unit configured to acquire, from a user via a communication device associated with the user, target object data including a feature of a target object selected by the user; an analysis unit configured to analyze whether the acquired target object data includes, as the feature, at least one of data of a proper noun or data of a character string related to the target object, and whether the target object data includes data of a color related to the target object; and an estimation unit configured to estimate a distance from the target object to the user based on an analysis result of the analysis unit, wherein the estimation unit estimates the distance such that the distance in a case where the target object data includes at least one of the data of the proper noun or the data of the character string is shorter than the distance in a ca…
Type: Grant
Filed: March 24, 2022
Date of Patent: July 9, 2024
Assignee: HONDA MOTOR CO., LTD.
Inventors: Teruhisa Misu, Naoki Hosomi, Kentaro Yamada
-
Patent number: 12033253
Abstract: Disclosed are augmented reality (AR) personalization systems that enable a user to edit and personalize presentations of real-world typography in real time. The AR personalization system captures an image depicting a physical location via a camera coupled to a client device. For example, the client device may include a mobile device with a camera configured to record and display images (e.g., photos, videos) in real time. The AR personalization system causes display of the image at the client device and scans the image to detect occurrences of typography within the image (e.g., signs, billboards, posters, graffiti).
Type: Grant
Filed: November 17, 2021
Date of Patent: July 9, 2024
Assignee: SNAP INC.
Inventors: Piers Cowburn, Qi Pan, Eitan Pilipski
-
Patent number: 12026860
Abstract: System and method for detecting the authenticity of products by detecting a unique chaotic signature. Photos of the products are taken at the plant and stored in a database/server. The server processes the images to detect, for each authentic product, a unique signature that is the result of a manufacturing process, a process of nature, etc. To check whether a product is genuine at the store, the user/buyer may take a picture of the product and send it to the server (e.g. using an app installed on a portable device or the like). Upon receipt of the photo, the server may process the received image in search of a pre-detected and/or pre-stored chaotic signature associated with an authentic product. The server may return a response to the user indicating the result of the search. A feedback mechanism may be included to guide the user to take a picture at the specific location on the product where the chaotic signature may exist.
Type: Grant
Filed: January 6, 2022
Date of Patent: July 2, 2024
Inventor: Guy Le Henaff
-
Patent number: 12026956
Abstract: Techniques are discussed herein for controlling autonomous vehicles within a driving environment, including generating and using bounding contours associated with objects detected in the environment. Image data may be captured and analyzed to identify and/or classify objects within the environment. Image-based and/or lidar-based techniques may be used to determine depth data associated with the objects, and a bounding contour may be determined based on the object boundaries and associated depth data. An autonomous vehicle may use the bounding contours of objects within the environment to classify the objects, predict the positions, poses, and trajectories of the objects, and determine trajectories and perform other vehicle control actions while safely navigating the environment.
Type: Grant
Filed: October 28, 2021
Date of Patent: July 2, 2024
Assignee: Zoox, Inc.
Inventors: Scott M. Purdy, Subhasis Das, Derek Xiang Ma, Zeng Wang
-
Patent number: 12020512
Abstract: Methods, systems, and computer-readable storage media for determining that a subject is a live person using a color-coded sequence of colors. A subject is illuminated in accordance with the sequence of colors. A sequence of images of the subject is captured, where the images are temporally synchronized with illumination by the color-coded sequence. A filtered response image is generated by a matched filtering process on the sequence of images using the selected color-coded sequence. A determination is made, based on structural features around an eye region of the filtered response image, that the subject is a live person. Responsive to determining that the subject is a live person, an authentication process is initiated to authenticate the subject.
Type: Grant
Filed: September 17, 2021
Date of Patent: June 25, 2024
Assignee: Jumio Corporation
Inventors: Spandana Vemulapalli, David Hirvonen
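The matched filtering step can be illustrated in one dimension (a simplified sketch, not the patented process): suppose each captured frame's measured illumination response is reduced to a scalar, and that per-frame sequence is correlated against the known code. A live face reflecting the coded light tracks the code closely; a replay or static spoof does not.

```python
def matched_filter_score(measured, code):
    """Normalized cross-correlation at zero lag between the per-frame
    measured response and the known color-code sequence.
    Returns a value in [-1, 1]; near 1 means the response follows the code."""
    n = len(code)
    mm = sum(measured) / n
    cm = sum(code) / n
    num = sum((m - mm) * (c - cm) for m, c in zip(measured, code))
    den = (sum((m - mm) ** 2 for m in measured)
           * sum((c - cm) ** 2 for c in code)) ** 0.5
    return num / den if den else 0.0
```

The patented method applies the filter per pixel to produce a filtered response image; the scalar version above only shows why the correlation discriminates.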
-
Patent number: 12002240
Abstract: The present disclosure relates to a method for determining a pose of an object in a local coordinate system of a robotic machine, the method including: capturing at least one image of a first face of the object and at least one image of at least a portion of a second face of the object; generating a point cloud representation of at least part of the object using image data obtained from the captured images of the first and second faces; fitting a first plane to the first face of the object and fitting a second plane to the second face of the object using the point cloud representation; determining a pose of the first plane and a pose of the second plane; retrieving a shape model of at least the first face of the object; locating the shape model in the local coordinate system using at least in part the at least one image of the first face; and determining the pose of the object in the local coordinate system. A vision system and robotic machine are also disclosed.
Type: Grant
Filed: September 4, 2019
Date of Patent: June 4, 2024
Assignee: FASTBRICK IP PTY LTD.
Inventor: John Francis Taylor
-
Patent number: 12002250
Abstract: A method for processing a candidate digital image includes defining a set of notable points in the candidate digital image. A set of at least three notable points is selected, comprising a notable departure point, a notable arrival point, and a third notable point not aligned with the departure and arrival points. A set of at least one route between the notable departure point and the notable arrival point is defined; the route passes through all of the selected notable points. Local characteristics of the pixels located along the route are extracted. The signal corresponding to the variation in the magnitude of the local characteristics as a function of each pixel along each defined route is recorded in the form of a fingerprint.
Type: Grant
Filed: January 8, 2020
Date of Patent: June 4, 2024
Assignee: SURYS
Inventors: Gaël Mahfoudi, Mohammed Amine Ouddan
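A rough sketch of the route-based fingerprint, assuming grayscale pixel intensity as the local characteristic and a single route through three notable points (all function names are hypothetical, and the patented method may use richer local characteristics):

```python
def sample_line(p0, p1):
    """Integer pixel coordinates along the segment p0 -> p1 (simple DDA)."""
    (x0, y0), (x1, y1) = p0, p1
    steps = max(abs(x1 - x0), abs(y1 - y0), 1)
    return [(round(x0 + (x1 - x0) * t / steps), round(y0 + (y1 - y0) * t / steps))
            for t in range(steps + 1)]

def route_fingerprint(image, departure, third, arrival):
    """Record the local characteristic (here: pixel intensity) along a route
    departure -> third -> arrival; image is a 2D list indexed as image[y][x]."""
    route = sample_line(departure, third) + sample_line(third, arrival)[1:]
    return [image[y][x] for x, y in route]
```

The resulting 1-D signal is the fingerprint; comparing two images reduces to comparing their signals along corresponding routes.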
-
Patent number: 11995511
Abstract: The present technology relates to image signal processing. One aspect involves analyzing reference imagery gathered by a camera system to determine which parts of an image frame offer high probabilities, relative to other image parts, of containing decodable watermark data. Another aspect whittles down the determined image frame parts based on detected content (e.g., a cereal box) versus expected background within those parts.
Type: Grant
Filed: June 9, 2020
Date of Patent: May 28, 2024
Assignee: Digimarc Corporation
Inventors: Vojtech Holub, Tomas Filler
-
Patent number: 11995793
Abstract: The present invention provides a generation method for a 3D asteroid dynamic map and a portable terminal. The method comprises: obtaining a panorama image; segmenting the panorama image into a sky region, a human body region, and a ground region; calculating a panoramic depth map for the three regions; transforming the panorama image and the panoramic depth map to generate an asteroid image and an asteroid depth map, respectively; generating an asteroid view under a virtual viewpoint; and rendering to generate a 3D asteroid dynamic map. By automatically generating the asteroid view under the virtual viewpoint and then synthesizing and rendering it, the technical solution generates an asteroid dynamic map with a 3D effect from the panorama image.
Type: Grant
Filed: October 21, 2019
Date of Patent: May 28, 2024
Assignee: ARASHI VISION INC.
Inventors: Ruidong Gao, Wenjie Jiang, Li Zhu
-
Patent number: 11989925
Abstract: An image matching apparatus matching a first image against a second image includes an acquiring unit, a generating unit, and a determining unit. The acquiring unit acquires a frequency feature of the first image and a frequency feature of the second image. The generating unit synthesizes the two frequency features and generates a quantized synthesized frequency feature in which the value of each element is represented by a binary or ternary value. The determining unit calculates a score indicating the degree to which the quantized synthesized frequency feature is a square wave of a single period, and matches the first image against the second image based on the score.
Type: Grant
Filed: May 16, 2019
Date of Patent: May 21, 2024
Assignee: NEC CORPORATION
Inventors: Toru Takahashi, Rui Ishiyama, Kengo Makino, Yuta Kudo
-
Patent number: 11987236
Abstract: A method for 3D object localization predicts pairs of 2D bounding boxes, each pair corresponding to a detected object in each of two consecutive input monocular images. The method generates, for each detected object, a relative motion estimation specifying the relative motion between the two images. The method constructs an object cost volume by aggregating temporal features from the two images using the pairs of 2D bounding boxes and the relative motion estimation, to predict a range of object depth candidates, a confidence score for each candidate, and an object depth selected from the candidates. The method updates the relative motion estimation based on the object cost volume and the object depth to produce a refined object motion and a refined object depth, then reconstructs a 3D bounding box for each detected object from the refined motion and depth.
Type: Grant
Filed: August 23, 2021
Date of Patent: May 21, 2024
Assignee: NEC Corporation
Inventors: Pan Ji, Buyu Liu, Bingbing Zhuang, Manmohan Chandraker, Xiangyu Chen
-
Patent number: 11972580
Abstract: This application relates to the field of video processing and discloses a video stitching method and apparatus, an electronic device, and a non-transitory computer-readable storage medium. The video stitching method includes: detecting a similarity between a first image and a second image, the first image being an image frame of a first to-be-stitched video and the second image being an image frame of a second to-be-stitched video; determining a motion vector of the first image relative to the second image when the similarity meets a preset condition; determining at least one compensated frame between the first image and the second image according to the motion vector; and stitching the first image and the second image based on the at least one compensated frame, thereby stitching the two videos.
Type: Grant
Filed: February 24, 2021
Date of Patent: April 30, 2024
Assignee: Tencent Technology (Shenzhen) Company Limited
Inventor: Shuo Deng
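Motion-compensated frame interpolation of the kind described can be illustrated in one dimension (a simplified sketch under assumed sign conventions, not the patented method): the compensated frame samples the first image shifted part-way along the motion vector, the second image shifted the remaining way, and blends the two.

```python
def blend_compensated(a, b, dx, t=0.5):
    """1-D illustration of a compensated frame between intensity rows a and b,
    where dx is the assumed displacement of a relative to b. Samples are
    clamped at the borders and blended linearly at fraction t."""
    n = len(a)
    out = []
    for i in range(n):
        ia = min(max(int(round(i + t * dx)), 0), n - 1)       # sample from a
        ib = min(max(int(round(i - (1 - t) * dx)), 0), n - 1)  # sample from b
        out.append((1 - t) * a[ia] + t * b[ib])
    return out
```

With dx = 0 this degenerates to a plain cross-fade; real interpolation would use a dense or block-wise motion field rather than one global vector.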
-
Patent number: 11967158
Abstract: An image processor includes a boundary line detection portion configured to detect a boundary line using an image in accordance with an image signal acquired by an imaging device; a parking frame detection portion configured to detect a parking frame using the detected boundary line; an in-frame scanning portion configured to acquire a dividing point that divides a pair of facing sides of the boundary lines defining the parking frame and to scan the image through the dividing point to detect an edge; a storage portion configured to store a state of an edge and an attribute of the parking frame, linked together; and a determination portion configured to determine an attribute of the parking frame using the state of the detected edge and the stored attribute of the parking frame linked to that edge state in the storage portion.
Type: Grant
Filed: May 12, 2021
Date of Patent: April 23, 2024
Assignee: FAURECIA CLARION ELECTRONICS CO., LTD.
Inventors: Takayuki Kaneko, Yousuke Haga
-
Patent number: 11961246
Abstract: Provided are a depth image processing method, a depth image processing apparatus, an electronic device, and a readable storage medium. The method includes: (101) obtaining n consecutive depth image frames; (102) determining a trusted pixel and a smoothing factor corresponding to the trusted pixel; (103) determining a time similarity weight; (104) determining a content similarity; (105) determining a content similarity weight based on the content similarity and the smoothing factor; and (106) filtering the depth value of the trusted pixel based on all time similarity weights and all content similarity weights.
Type: Grant
Filed: December 10, 2021
Date of Patent: April 16, 2024
Assignee: GUANGDONG OPPO MOBILE TELECOMMUNICATIONS CORP. LTD
Inventor: Jian Kang
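Steps (103) through (106) resemble a temporal bilateral filter; a minimal sketch under that assumption (parameter names and weight formulas are hypothetical, not taken from the patent):

```python
import math

def filter_depth(frames, x, y, sigma_t=2.0, smoothing=0.1):
    """Temporal filtering of the depth at pixel (x, y).
    frames: list of 2D depth maps, frames[-1] being the current frame.
    The time similarity weight decays with frame distance; the content
    similarity weight decays with depth difference, scaled by the
    smoothing factor, so dissimilar past depths contribute little."""
    current = frames[-1][y][x]
    num = den = 0.0
    for i, frame in enumerate(frames):
        d = frame[y][x]
        dt = len(frames) - 1 - i  # how many frames back in time
        w_time = math.exp(-(dt ** 2) / (2 * sigma_t ** 2))
        w_content = math.exp(-((d - current) ** 2) / max(smoothing, 1e-9))
        w = w_time * w_content
        num += w * d
        den += w
    return num / den
```

The content weight is what keeps a transient outlier in an earlier frame from dragging the filtered depth away from the current value.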
-
Patent number: 11947005
Abstract: In one implementation, a method includes: obtaining a first depth estimation characterizing a distance between the device and a surface in a real-world environment, wherein the first depth estimation is derived from image data including a representation of the surface; receiving, using the audio transceiver, an acoustic reflection of an acoustic wave, wherein the acoustic wave is transmitted in a known direction relative to the device; determining a second depth estimation based on the acoustic reflection, wherein the second depth estimation characterizes the distance between the device and the surface; and determining a confirmed depth estimation characterizing that distance by resolving any mismatch between the first and second depth estimations.
Type: Grant
Filed: November 1, 2022
Date of Patent: April 2, 2024
Assignee: APPLE INC.
Inventors: Christopher T. Eubank, Ryan S. Carlin