Target Tracking Or Detecting Patents (Class 382/103)
  • Patent number: 12167171
    Abstract: A vehicular vision system includes a camera disposed at a vehicle and capturing image data. The vehicular vision system, via processing at an electronic control unit of a first frame of image data captured by the camera, detects a first object exterior of the vehicle and determines an attribute of the first object. The vehicular vision system, via processing at the electronic control unit of a second frame of image data captured by the camera, detects a second object exterior of the vehicle and determines the attribute of the second object. The system determines whether the first object and the second object are the same object based on a similarity measurement. The vehicular vision system, responsive to determining that the first object and the second object are the same object, merges the attribute of the first object with the attribute of the second object.
    Type: Grant
    Filed: September 27, 2022
    Date of Patent: December 10, 2024
    Assignee: Magna Electronics Inc.
    Inventor: Liang Zhang
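The cross-frame association step in the abstract above can be sketched as follows. This is an illustrative stand-in, not the patented method: the patent only says "similarity measurement", so bounding-box IoU, the 0.5 threshold, and all names below are assumptions.

```python
# Hedged sketch of cross-frame object association: two detections from
# consecutive frames are treated as the same object when a similarity
# score (here, bounding-box IoU -- an assumed metric) exceeds a
# threshold, after which their attributes are merged.

def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def merge_if_same(obj1, obj2, threshold=0.5):
    """Merge attribute dicts when the two detections overlap enough."""
    if iou(obj1["box"], obj2["box"]) < threshold:
        return None  # treated as distinct objects
    merged = dict(obj1["attrs"])
    merged.update(obj2["attrs"])  # newer frame wins on conflicts
    return merged

frame1_obj = {"box": (10, 10, 50, 50), "attrs": {"class": "pedestrian"}}
frame2_obj = {"box": (12, 11, 52, 51), "attrs": {"speed": 1.2}}
result = merge_if_same(frame1_obj, frame2_obj)
```

Here the two boxes overlap heavily (IoU ≈ 0.86), so the attributes determined in the two frames are merged into one record.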
  • Patent number: 12166941
    Abstract: Disclosed are a picture encryption method and apparatus, a computer device, a storage medium and a program product. The method includes: acquiring N pictures having a time sequence, N being an integer equal to or greater than 2; performing feature extraction on the N pictures to acquire a picture feature of each of the N pictures; successively performing target prediction on the N pictures according to the time sequence to obtain prediction information of each of the N pictures, the target prediction referring to a prediction on each of the N pictures based on status information, and the status information being information which is updated based on picture features of pictures that have been predicted; and encrypting the N pictures based on the prediction information of each of the N pictures.
    Type: Grant
    Filed: September 9, 2022
    Date of Patent: December 10, 2024
    Assignee: TENCENT TECHNOLOGY (SHENZHEN) COMPANY LIMITED
    Inventors: Weiming Yang, Huizhong Tang, Shaoming Wang, Runzeng Guo
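The state-driven encryption loop described in the abstract above can be illustrated with a toy sketch. The abstract does not specify the feature extractor, the state update, or the cipher; the SHA-256 choices and XOR keystream below are deliberate simplifications, not the claimed scheme.

```python
# Illustrative sketch: each picture in a time-ordered sequence is
# "predicted" from a running state built out of the features of earlier
# pictures; the prediction then seeds that picture's encryption key.
import hashlib

def extract_feature(picture: bytes) -> bytes:
    """Stand-in for the patent's learned picture feature."""
    return hashlib.sha256(picture).digest()

def encrypt_sequence(pictures):
    state = b"\x00" * 32  # status information, updated picture by picture
    encrypted = []
    for pic in pictures:
        # Target prediction for this picture, driven purely by the state
        # accumulated from earlier pictures' features.
        prediction = hashlib.sha256(state).digest()
        keystream = (prediction * (len(pic) // 32 + 1))[: len(pic)]
        encrypted.append(bytes(p ^ k for p, k in zip(pic, keystream)))
        state = hashlib.sha256(state + extract_feature(pic)).digest()
    return encrypted

pics = [b"same picture bytes", b"same picture bytes"]
ciphertexts = encrypt_sequence(pics)
```

Because the state evolves with each processed picture, identical pictures at different positions in the sequence encrypt to different ciphertexts.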
  • Patent number: 12165428
    Abstract: A portion rectangle estimation unit estimates a specific portion of a human body from an image, and outputs a first portion rectangle. A first human body rectangle estimation unit estimates the human body corresponding to the specific portion in the first portion rectangle based on coordinates of the first portion rectangle, and outputs a first human body rectangle. A second human body rectangle estimation unit estimates the human body from the image, and outputs a second human body rectangle. A second portion rectangle estimation unit estimates the specific portion corresponding to the human body in the second human body rectangle based on coordinates of the second human body rectangle, and outputs a second portion rectangle. A human body integration unit integrates duplicate human body rectangles based on at least a relationship between the portion rectangles with respect to a plurality of pairs of mutually corresponding portion rectangles and human body rectangles.
    Type: Grant
    Filed: March 30, 2021
    Date of Patent: December 10, 2024
    Assignee: NEC CORPORATION
    Inventor: Junichi Kamimura
  • Patent number: 12165021
    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media for evaluating robot learning. In some implementations, a system receives classification examples from a plurality of remote devices over a communication network. The classification examples can include (i) a data representation generated by a remote device based on sensor data captured by the remote device and (ii) a classification corresponding to the data representation. The system assigns quality scores to the classification examples based on a level of similarity of the data representations with other data representations. The system selects a subset of the classification examples based on the quality scores assigned to the classification examples. The system trains a machine learning model using the selected subset of the classification examples.
    Type: Grant
    Filed: May 4, 2021
    Date of Patent: December 10, 2024
    Assignee: Google LLC
    Inventors: Nareshkumar Rajkumar, Patrick Leger
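The quality-scoring and subset-selection steps in the abstract above can be sketched like this. Cosine similarity is an assumed choice (the abstract only says "level of similarity"), and all names are illustrative.

```python
# Rough sketch: each classification example carries an embedding; an
# example scores higher when its embedding agrees with other embeddings
# sharing its label. Top-scoring examples form the training subset.
import math

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def select_examples(examples, keep=2):
    """examples: list of (embedding, label). Returns highest-quality subset."""
    scored = []
    for i, (emb_i, label_i) in enumerate(examples):
        peers = [e for j, (e, l) in enumerate(examples)
                 if j != i and l == label_i]
        score = (sum(cosine(emb_i, p) for p in peers) / len(peers)
                 if peers else 0.0)
        scored.append((score, i))
    scored.sort(reverse=True)
    return [examples[i] for _, i in scored[:keep]]

data = [([1.0, 0.0], "cup"), ([0.9, 0.1], "cup"), ([0.0, 1.0], "cup")]
subset = select_examples(data, keep=2)
```

In this toy run the outlier embedding `[0.0, 1.0]` disagrees with its peers, receives the lowest quality score, and is excluded from the training subset.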
  • Patent number: 12167014
    Abstract: Systems, apparatuses and methods may provide for source device technology that identifies a plurality of object regions in a video frame, automatically generates context information for the video frame on a per-object region basis and embeds the context information in a signal containing the video frame. Additionally, playback device technology may decode a signal containing a video frame and embedded context information, identify a plurality of object regions in the video frame based on the embedded context information, and automatically select one or more post-processing configurations for the video frame on a per-object region basis.
    Type: Grant
    Filed: May 22, 2020
    Date of Patent: December 10, 2024
    Assignee: Intel Corporation
    Inventors: Changliang Wang, Mohammad R. Haghighat, Wei Hu, Tao Xu, Tianmi Chen, Bin Yang, Jia Bao, Raul Diaz
  • Patent number: 12165761
    Abstract: In order to generate annotated ground truth data for training a machine learning model for inferring a desired scan configuration of a medical imaging system from an observed workflow scene during exam preparation, a system is provided that comprises a sensor data interface configured to access a measurement image of a patient positioned for an imaging examination. The measurement image is generated on the basis of sensor data obtained from a sensor arrangement, which has a field of view including at least part of an area, where the patient is positioned for imaging. The system further comprises a medical image data interface configured to access a medical image of the patient obtained from a medical imaging apparatus during the imaging examination. The patient is positioned in a given geometry with respect to a reference coordinate system of the medical imaging apparatus. The system further comprises an exam metadata interface configured to access exam metadata of the imaging examination.
    Type: Grant
    Filed: July 3, 2020
    Date of Patent: December 10, 2024
    Assignee: KONINKLIJKE PHILIPS N.V.
    Inventors: Julien Thomas Senegas, Sascha Krueger
  • Patent number: 12165387
    Abstract: A method of inspecting a composite structure formed of plies of tows is provided. The method involves receiving an image of an upper ply overlapping lower plies, the upper ply tow ends defining a boundary between plies, and applying extracted sub-images to a trained machine learning model to detect the upper or lower ply. Probability maps are produced in which pixels of the sub-images are associated with probabilities the pixels belong to an object class for the upper or lower ply. The method may also involve transforming the probability maps into reconstructed sub-images, stitching together a composite image, and applying the composite image to a feature detector to detect locations of tow ends of the upper ply. The method may also involve comparing the locations to as-designed locations of the tow ends, inspecting the composite structure, and indicating a result of the comparison.
    Type: Grant
    Filed: December 6, 2021
    Date of Patent: December 10, 2024
    Assignees: The Boeing Company, University of Washington
    Inventors: Adriana W. Blom-Schieber, Wei Guo, Ashis G. Banerjee, Ekta U. Samani
  • Patent number: 12165418
    Abstract: A first object detection unit detects an object from a captured image and outputs object position information, in a non-high speed processing mode. A switching unit switches to a high-speed processing mode when the first object detection unit outputs the object position information. An image trimming unit extracts a trimmed image from the captured image based on the object position information output from the first object detection unit, in the high-speed processing mode. A second object detection unit detects an object from the trimmed image and outputs the object position information. A protrusion determination unit determines whether the object thus detected protrudes from the trimmed image. When it is determined that the object thus detected protrudes from the trimmed image, the switching unit switches to the non-high speed processing mode.
    Type: Grant
    Filed: April 28, 2022
    Date of Patent: December 10, 2024
    Assignee: JVCKENWOOD Corporation
    Inventor: Yincheng Yang
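The mode-switching logic in the abstract above can be sketched as a small state machine. The boundary-touch test is my reading of the "protrusion" determination; the margin parameter and names are illustrative assumptions.

```python
# Minimal sketch: in high-speed mode the detector runs on a trimmed
# crop; if a detection touches the crop boundary (the "protrusion"
# test), the system falls back to full-frame (non-high-speed) mode.

def protrudes(box, crop, margin=0):
    """True when box (x1,y1,x2,y2) reaches the edge of crop (x1,y1,x2,y2)."""
    return (box[0] <= crop[0] + margin or box[1] <= crop[1] + margin
            or box[2] >= crop[2] - margin or box[3] >= crop[3] - margin)

class ModeSwitcher:
    def __init__(self):
        self.high_speed = False

    def update(self, detection_box, crop_box):
        if not self.high_speed:
            # Non-high-speed mode: a detection enables the fast path.
            if detection_box is not None:
                self.high_speed = True
        elif detection_box is not None and protrudes(detection_box, crop_box):
            # Object sticks out of the trimmed image: back to full frames.
            self.high_speed = False
        return self.high_speed

sw = ModeSwitcher()
mode_after_detect = sw.update((40, 40, 60, 60), crop_box=(0, 0, 100, 100))
mode_after_protrude = sw.update((95, 40, 100, 60), crop_box=(30, 30, 100, 100))
```

The first detection switches the system into high-speed (trimmed) processing; the second detection reaches the crop edge, so processing reverts to the full frame.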
  • Patent number: 12165339
    Abstract: An information processing apparatus (2000) includes a first analyzing unit (2020), a second analyzing unit (2040), and an estimating unit (2060). The first analyzing unit (2020) calculates a flow of a crowd in a capturing range of a fixed camera (10) using a first surveillance image (12). The second analyzing unit (2040) calculates a distribution of an attribute of objects in a capturing range of a moving camera (20) using a second surveillance image (22). The estimating unit (2060) estimates an attribute distribution for a range that is not included in the capturing range of the moving camera (20).
    Type: Grant
    Filed: August 16, 2023
    Date of Patent: December 10, 2024
    Assignee: NEC CORPORATION
    Inventors: Ryoma Oami, Katsuhiko Takahashi, Yusuke Konishi, Hiroshi Yamada, Hiroo Ikeda, Junko Nakagawa, Kosuke Yoshimi, Ryo Kawai, Takuya Ogawa
  • Patent number: 12165355
    Abstract: To improve pose estimation accuracy, a pose estimation apparatus according to the present invention extracts a person area from an image, and generates person area image information, based on an image of the extracted person area. The pose estimation apparatus according to the present invention further extracts a joint point of a person from the image, and generates joint point information, based on the extracted joint point. Then, the pose estimation apparatus according to the present invention generates feature value information, based on both of the person area image information and the joint point information. Then, the pose estimation apparatus according to the present invention estimates a pose of a person included in the image, based on an estimation model in which the feature value information is an input, and pose estimation result is an output.
    Type: Grant
    Filed: February 16, 2022
    Date of Patent: December 10, 2024
    Assignee: NEC CORPORATION
    Inventors: Tetsuo Inoshita, Yuichi Nakatani
  • Patent number: 12165359
    Abstract: A spatial recognition device has a spatial recognition unit for generating spatial data by recognizing a three-dimensional shape in a real space, and a self-location estimating unit for estimating the self-location in real space. A spatial data unification unit unifies the spatial data and spatial data generated by another spatial recognition device to generate unified spatial data that is expressed in the same coordinate system having the same origin. A self-location sharing unit transmits the self-location based on the unified spatial data, to another spatial recognition device. The self-location sharing unit acquires the self-location from another spatial recognition device when the self-location estimating unit cannot estimate the self-location.
    Type: Grant
    Filed: May 25, 2020
    Date of Patent: December 10, 2024
    Assignees: INFORMATIX INC., CHIYODA SOKKI CO., LTD.
    Inventors: Koji Konno, Keitaro Hirano, Yukio Hirahara
  • Patent number: 12159422
    Abstract: A virtual or augmented reality display system can include a first sensor to provide measurements of a user's head pose over time and a processor to estimate the user's head pose based on at least one head pose measurement and based on at least one calculated predicted head pose. The processor can combine the head pose measurement and the predicted head pose using one or more gain factors. The one or more gain factors may vary based upon the user's head pose position within a physiological range of movement.
    Type: Grant
    Filed: September 13, 2022
    Date of Patent: December 3, 2024
    Assignee: Magic Leap, Inc.
    Inventors: Jose Felix Rodriguez, Ricardo Martinez Perez
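The gain-blended pose estimate described in the abstract above can be written down directly. The exact gain schedule is not specified in the abstract; the linear falloff toward the edge of a physiological range below is an assumption, as are the angle units and the 60-degree limit.

```python
# Sketch of gain-blended head pose estimation: the estimate combines a
# sensor measurement with a motion-model prediction, and the blend gain
# shrinks toward the edges of a physiological range of motion.

def position_gain(angle, limit=60.0):
    """Gain near 1 at center of the range, tapering toward +/-limit deg."""
    return max(0.0, 1.0 - abs(angle) / limit)

def fuse_pose(measured, predicted, limit=60.0):
    g = position_gain(measured, limit)
    # Complementary blend: trust the measurement more mid-range,
    # the prediction more near the physiological extremes.
    return g * measured + (1.0 - g) * predicted

center = fuse_pose(measured=0.0, predicted=5.0)     # gain = 1.0
extreme = fuse_pose(measured=60.0, predicted=55.0)  # gain = 0.0
```

Mid-range the measurement dominates; at the limit of the range the predicted pose is used instead, which damps measurement noise where real head motion is physiologically implausible.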
  • Patent number: 12159450
    Abstract: The invention provides a model training method and system that uses pretrained features of a teacher neural network trained on a billion-size dataset to train a student neural network. The model training method leverages the teacher neural network to design a more stable loss function that works well with more sophisticated learning rate schedules to reduce training time and make the augmentation designing process more natural.
    Type: Grant
    Filed: August 31, 2022
    Date of Patent: December 3, 2024
    Assignee: VINBRAIN JOINT STOCK COMPANY
    Inventors: Minh Thanh Huynh, Chanh D T Nguyen, Steven Quoc Hung Truong
  • Patent number: 12159212
    Abstract: A digital processing engine is configured to receive input data from a memory. The input data comprises first input channels. The digital processing engine is further configured to convolve, with a convolution model, the input data. The convolution model comprises a first filter layer configured to generate first intermediate data having first output channels. A number of the first output channels is less than a number of the first input channels. The convolution model further comprises a second filter layer comprising shared spatial filters and is configured to generate second intermediate data by convolving each of the first output channels with a respective one of the shared spatial filters. Each of the shared spatial filters comprises first weights. The digital processing engine is further configured to generate output data from the second intermediate data and store the output data in the memory.
    Type: Grant
    Filed: February 22, 2021
    Date of Patent: December 3, 2024
    Assignee: XILINX, INC.
    Inventor: Ephrem Wu
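The two-stage convolution structure in the abstract above, a channel-reducing first layer followed by per-channel spatial filters, can be shown with a toy example. The 1x1 choice for the first layer and all shapes are assumptions for illustration; the abstract only requires fewer output than input channels in the first layer.

```python
# Toy sketch: a channel-reducing 1x1 ("pointwise") layer, then a
# per-channel ("depthwise") spatial layer in which each reduced channel
# is convolved with its own small filter.

def pointwise(inputs, weights):
    """inputs: [C_in][H][W]; weights: [C_out][C_in] -> [C_out][H][W]."""
    h, w = len(inputs[0]), len(inputs[0][0])
    return [[[sum(weights[o][c] * inputs[c][y][x] for c in range(len(inputs)))
              for x in range(w)] for y in range(h)]
            for o in range(len(weights))]

def depthwise3x3(channel, kernel):
    """Valid 3x3 convolution of one channel with one spatial filter."""
    h, w = len(channel), len(channel[0])
    return [[sum(kernel[i][j] * channel[y + i][x + j]
                 for i in range(3) for j in range(3))
             for x in range(w - 2)] for y in range(h - 2)]

image = [[[1] * 4 for _ in range(4)], [[2] * 4 for _ in range(4)],
         [[3] * 4 for _ in range(4)]]           # 3 input channels, 4x4
reduce_w = [[1, 0, 0], [0, 0, 1]]               # 3 -> 2 output channels
identity_k = [[0, 0, 0], [0, 1, 0], [0, 0, 0]]  # identity spatial filter
mid = pointwise(image, reduce_w)
out = [depthwise3x3(ch, identity_k) for ch in mid]
```

Reducing channels before the spatial stage is the point of the structure: the expensive spatial filtering runs on 2 channels here instead of 3, and each spatial filter touches only one channel.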
  • Patent number: 12159465
    Abstract: A method for an end-to-end boundary lane detection system is described. The method includes gridding a red-green-blue (RGB) image captured by a camera sensor mounted on an ego vehicle into a plurality of image patches. The method also includes generating different image patch embeddings to provide correlations between the plurality of image patches and the RGB image. The method further includes encoding the different image patch embeddings into predetermined categories, grid offsets, and instance identifications. The method also includes generating lane boundary keypoints of the RGB image based on the encoding of the different image patch embeddings.
    Type: Grant
    Filed: April 14, 2022
    Date of Patent: December 3, 2024
    Assignee: TOYOTA JIDOSHA KABUSHIKI KAISHA
    Inventors: Kun-Hsin Chen, Shunsho Kaku, Jie Li, Steven Parkison, Jeffrey M. Walls, Kuan-Hui Lee
  • Patent number: 12159427
    Abstract: An object position determining system comprising: at least one light source, configured to emit light; at least one optical sensor, configured to sense optical data generated based on reflected light of the light; and a processing circuit, configured to compute distance information between the optical sensor and an object which generates the reflected light. The processing circuit further determines a position of the object according to the distance information.
    Type: Grant
    Filed: January 19, 2022
    Date of Patent: December 3, 2024
    Assignee: PixArt Imaging Inc.
    Inventors: Ming Shun Manson Fei, Sen-Huang Huang, Chi-Chieh Liao
  • Patent number: 12159359
    Abstract: Augmented reality systems provide graphics over views from a mobile device for both in-venue and remote viewing of a sporting or other event. A server system can provide a transformation between the coordinate system of a mobile device (smart phone, tablet computer, head mounted display) and a real world coordinate system. Requested graphics for the event are displayed over a view of an event.
    Type: Grant
    Filed: December 19, 2022
    Date of Patent: December 3, 2024
    Assignee: Quintar, Inc.
    Inventors: Timothy P. Heidmann, Sankar Jayaram, Wayne O. Cochran, John Buddy Scott, John Harrison, Durga Raj Mathur, Richard St Clair Bailey
  • Patent number: 12159461
    Abstract: An information acquisition support apparatus includes: an acquisition unit that acquires a plurality of notifications related to an event; an analysis unit that analyzes the plurality of notifications on the basis of an analysis criterion; and an event state generation unit that generates event state information obtained by integrating information indicated by the plurality of notifications on the basis of an analysis result of the plurality of notifications and an information integration criterion, and thereby, the information acquisition support apparatus supports rapid and accurate grasping of a state of a site where an event is occurring.
    Type: Grant
    Filed: December 13, 2021
    Date of Patent: December 3, 2024
    Assignee: NEC CORPORATION
    Inventors: Masaya Tanaka, Masako Wada, Masayuki Ogawa, Muneaki Onozato, Takamichi Yamazaki
  • Patent number: 12159430
    Abstract: An automatic precision calculation method of a pose of a rotational linear array scanning image includes: obtaining a collection parameter of the rotational linear array scanning image and a camera intrinsic parameter; based on the above parameters, projecting the rotational linear array scanning image to its tangent plane by orthographic projection transformation to generate an equivalent frame image having approximately the same intrinsic and extrinsic parameters as the rotational linear array scanning image, and calculating a coordinate of each pixel of the equivalent frame image projected onto the rotational linear array scanning image based on an inverse projection transformation calculation method; by using a structure-from-motion method, automatically calculating a pose parameter of the equivalent frame image and a corresponding waypoint three-dimensional coordinate; and, with the pose parameter of the equivalent frame image as an initial value, obtaining an accurate imaging parameter of the rotational linear array scanning image.
    Type: Grant
    Filed: August 28, 2022
    Date of Patent: December 3, 2024
    Assignee: WUHAN UNIVERSITY
    Inventors: Zhaocong Wu, Zhao Yan
  • Patent number: 12159468
    Abstract: A vehicle occupant gaze detection system includes a non-transitory computer readable medium configured to store instructions thereon; and a processor connected to the non-transitory computer readable medium. The processor is configured to execute the instructions for receiving gaze data from a vehicle. The gaze data includes a viewing direction of an occupant of the vehicle, vehicle information, and time stamp information. The processor is configured to execute the instructions for generating a gridmap including an array of lattice points, and each lattice point of the array of lattice points corresponds to a location. The processor is configured to execute the instructions for generating a histogram including information related to viewing of at least one location by the occupant. The processor is configured to execute the instructions for determining an effectiveness of an object based on the histogram; and transmitting a recommendation based on the effectiveness of the object.
    Type: Grant
    Filed: October 8, 2021
    Date of Patent: December 3, 2024
    Assignee: WOVEN BY TOYOTA, INC.
    Inventor: Daisuke Hashimoto
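The gridmap/histogram step in the abstract above can be sketched simply: gaze samples are snapped to lattice points and counted, and the count for the lattice point containing an object serves as a proxy for how often it was viewed. The cell size and the effectiveness ratio below are illustrative assumptions, not the patent's rule.

```python
# Quick sketch of gridmap-based gaze accumulation.
from collections import Counter

def to_lattice(point, cell=10.0):
    """Snap an (x, y) gaze location to its lattice point."""
    return (int(point[0] // cell), int(point[1] // cell))

def build_histogram(gaze_points, cell=10.0):
    """Histogram of how many gaze samples fall on each lattice point."""
    return Counter(to_lattice(p, cell) for p in gaze_points)

def effectiveness(histogram, object_cell, total):
    """Fraction of gaze samples landing on the object's lattice point."""
    return histogram[object_cell] / total if total else 0.0

samples = [(12.0, 13.0), (14.0, 18.0), (55.0, 60.0), (11.0, 12.0)]
hist = build_histogram(samples)
billboard_score = effectiveness(hist, object_cell=(1, 1), total=len(samples))
```

Three of the four samples land on lattice point (1, 1), so a hypothetical object located there scores 0.75 and might earn a recommendation.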
  • Patent number: 12154315
    Abstract: Described herein is a system and computer-implemented method of grouping clothing products by brand within a set of clothing images in an electronic catalog of an internet store serving online customers. An object detection model is applied to extract the dress section within the clothing image(s) to create preprocessed image(s). A machine learning model is applied to the preprocessed image(s) to convert each image into a vector representation through an unsupervised technique. The vector contains the design features of the clothing image; the design features are representative of the brands. A clustering model is applied to the vector representations to arrive at the grouping of similar images of the clothing products. The grouped clothing products are displayed via a user interface, ordered by brand, to the online customers.
    Type: Grant
    Filed: January 6, 2022
    Date of Patent: November 26, 2024
    Assignee: Unisense Tech, Inc.
    Inventors: Bharat Vijay, Christopher Fase Sandman, Kyle Allen Norton, Ayyar Arun Balavenkatasubramanian, Srihari Padmanaban Venkatesan
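The grouping stage of the abstract above can be illustrated on toy embeddings. The patent applies "a clustering model"; this greedy distance-threshold grouping is a deliberately simple stand-in, not the claimed method, and the threshold is an assumption.

```python
# Hedged sketch: given vector representations of preprocessed clothing
# images, nearby vectors are grouped together as similar designs.
import math

def euclidean(u, v):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

def group_by_design(vectors, threshold=0.5):
    """Assign each vector to the first group whose leader is close enough."""
    groups = []  # each entry: (leader_vector, [member indices])
    for i, vec in enumerate(vectors):
        for leader, members in groups:
            if euclidean(vec, leader) <= threshold:
                members.append(i)
                break
        else:
            groups.append((vec, [i]))
    return [members for _, members in groups]

embeddings = [[0.1, 0.1], [0.15, 0.12], [0.9, 0.9], [0.88, 0.93]]
clusters = group_by_design(embeddings)
```

The two near-identical pairs of embeddings fall into two groups, standing in for two brands with distinct design signatures.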
  • Patent number: 12151706
    Abstract: A live map system may be used to propagate observations collected by autonomous vehicles operating in an environment to other autonomous vehicles and thereby supplement a digital map used in the control of the autonomous vehicles. In addition, a live map system in some instances may be used to propagate location-based teleassist triggers to autonomous vehicles operating within an environment. A location-based teleassist trigger may be generated, for example, in association with a teleassist session conducted between an autonomous vehicle and a remote teleassist system proximate a particular location, and may be used to automatically trigger a teleassist session for another autonomous vehicle proximate that location and/or to propagate a suggested action to that other autonomous vehicle.
    Type: Grant
    Filed: February 19, 2024
    Date of Patent: November 26, 2024
    Assignee: AURORA OPERATIONS, INC.
    Inventors: Niels Joubert, Benjamin Kaplan, Stephen O'Hara
  • Patent number: 12154318
    Abstract: An image recognition method, readable storage medium, and image recognition system are provided. The method includes: inputting an image, running a recognition engine to recognize the species of the content in the image and obtain a species recognition result, and running a non-category engine to determine whether the content belongs to a non-preset category; ignoring the determination result of the non-category engine and obtaining a category recognition result that the content belongs to a preset category if the confidence of the species recognition result is not less than a first preset threshold; obtaining a category recognition result that the content belongs to the preset category if the confidence of the species recognition result is less than the first preset threshold and the determination result of the non-category engine is negative; and obtaining a category recognition result that the content belongs to the non-preset category if the confidence of the species recognition result is less than the first preset threshold and the determination result of the non-category engine is affirmative.
    Type: Grant
    Filed: May 9, 2022
    Date of Patent: November 26, 2024
    Assignee: Hangzhou Ruisheng Software Co., Ltd.
    Inventors: Qingsong Xu, Qing Li
  • Patent number: 12154338
    Abstract: System, apparatus and method of image processing to detect a substance spill on a solid surface such as a floor is disclosed. First data representing a first image, captured by an image sensor, of a region including a solid surface, is received. A trained semantic segmentation neural network is applied to the first image data to determine, for each pixel of the first image, a spill classification value associated with the pixel, the determined spill classification value for a given pixel indicating the extent to which the trained semantic segmentation neural network estimates, based on its training, that the given pixel illustrates a substance spill. The presence of a substance spill on the solid surface is detected based on the determined spill classification values of the pixels of the first image.
    Type: Grant
    Filed: December 7, 2021
    Date of Patent: November 26, 2024
    Assignee: SeeChange Technologies Limited
    Inventors: Ariel Edgar Ruiz-Garcia, David Packwood, Michael Andrew Pallister
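The decision step after segmentation in the abstract above can be sketched as follows. The 0.5 cutoff and the minimum-pixel rule are assumptions; the abstract does not fix a decision rule on top of the per-pixel classification values.

```python
# Small sketch: given per-pixel spill classification values in [0, 1]
# from the segmentation network, a spill is flagged when enough pixels
# exceed a confidence cutoff.

def detect_spill(classification_map, cutoff=0.5, min_pixels=3):
    """classification_map: 2D list of per-pixel spill scores."""
    spill_pixels = sum(1 for row in classification_map
                       for score in row if score >= cutoff)
    return spill_pixels >= min_pixels

clean_floor = [[0.1, 0.0], [0.2, 0.1]]
wet_floor = [[0.9, 0.8], [0.7, 0.1]]
has_spill = detect_spill(wet_floor)
no_spill = detect_spill(clean_floor)
```

A real system would likely also require the high-scoring pixels to form a connected region; that refinement is omitted here for brevity.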
  • Patent number: 12154572
    Abstract: Systems, methods, and non-transitory computer readable media including instructions for interpreting facial skin micromovements are disclosed. Interpreting facial skin micromovements includes receiving during a first time period first signals representing prevocalization facial skin micromovements, and receiving during a second time period succeeding the first time period, second signals representing sounds. The sounds are analyzed to identify words spoken during the second time period, and the words are correlated with the prevocalization facial skin micromovements received during the first time period. The correlations are stored for future use. During a third time period, third signals representing facial skin micromovements are received in an absence of vocalization. Using the correlations, language associated with the third signals is identified and outputted.
    Type: Grant
    Filed: November 9, 2023
    Date of Patent: November 26, 2024
    Assignee: Q (Cue) Ltd.
    Inventor: Yonatan Wexler
  • Patent number: 12154343
    Abstract: An information acquisition support apparatus includes: an acquisition unit that acquires a plurality of notifications related to an event; an analysis unit that analyzes the plurality of notifications on the basis of an analysis criterion; and an event state generation unit that generates event state information obtained by integrating information indicated by the plurality of notifications on the basis of an analysis result of the plurality of notifications and an information integration criterion, and thereby, the information acquisition support apparatus supports rapid and accurate grasping of a state of a site where an event is occurring.
    Type: Grant
    Filed: August 28, 2023
    Date of Patent: November 26, 2024
    Assignee: NEC CORPORATION
    Inventors: Masaya Tanaka, Masako Wada, Masayuki Ogawa, Muneaki Onozato, Takamichi Yamazaki
  • Patent number: 12154310
    Abstract: Systems and methods for automatically determining and displaying salient portions of images are disclosed. According to certain aspects, an electronic device may support a design application that may apply a saliency detection learning model to a digital image, resulting in the application generating one or more salient portions of the digital image. The electronic device may generate a digital rendering of the salient portion of the image on digital models of items or products, and may enable a user to review the digital rendering. The user may also choose alternative salient portions of the digital image and/or aspect ratios for those salient portions for inclusion on a digital model of the item or product.
    Type: Grant
    Filed: May 28, 2021
    Date of Patent: November 26, 2024
    Assignee: CIMPRESS SCHWEIZ GMBH
    Inventors: Anshul Garg, Vikramaditya Khemka, Ajay Joshi
  • Patent number: 12155948
    Abstract: Various implementations disclosed herein include devices, systems, and methods that buffer events in device memory during synchronous readout of a plurality of frames by a sensor. Various implementations disclosed herein include devices, systems, and methods that disable a sensor communication link until the buffered events are sufficient for transmission by the sensor. In some implementations, the sensor using a synchronous readout may select a readout mode for one or more frames based on how many of the pixels are detecting events. In some implementations, a first mode that reads out only data for a low percentage of pixels that have events uses the device memory and a second mode bypasses the device memory based on accumulation criteria such as high percentage of pixels detecting events. In the second mode, less data per pixel may be readout.
    Type: Grant
    Filed: November 10, 2020
    Date of Patent: November 26, 2024
    Assignee: Apple Inc.
    Inventors: Nikolai E. Bock, Emanuele Mandelli, Gennady A. Manokhin
  • Patent number: 12154217
    Abstract: An electronic apparatus includes a display unit, a determining unit configured to determine a target object that a user is paying attention to, an acquiring unit configured to acquire a plurality of types of information on the target object, a referring unit configured to refer to reference information associated with the target object, and a control unit configured to determine the at least one of the plurality of types of information based on the reference information and the plurality of types of information, the display unit displays the at least one of the plurality of types of information and image data stored in a memory, and the reference information is past information of each of the plurality of types of information.
    Type: Grant
    Filed: June 13, 2022
    Date of Patent: November 26, 2024
    Assignee: CANON KABUSHIKI KAISHA
    Inventors: Tomoyuki Shiozaki, Hiroshi Toriumi, Atsushi Ichihara, Hiroki Kitanosako
  • Patent number: 12154341
    Abstract: Monitoring systems and methods for identifying an object of interest after the object of interest has undergone a change in appearance. One example provides an image sensor is configured to monitor a first area. A first electronic processor is configured to detect a first appearance of an object of interest within the first area, and determine a visual characteristic of the object of interest. The first electronic processor is configured to receive a first notification indicative of movement of the object of interest into a second area and an access input, and associate the visual characteristic of the object of interest with the access input. The first electronic processor is configured to detect a second appearance of the object of interest within the first area, and update the visual characteristic of the object of interest based on the second appearance of the object of interest.
    Type: Grant
    Filed: March 9, 2022
    Date of Patent: November 26, 2024
    Assignee: MOTOROLA SOLUTIONS, INC.
    Inventors: Pawel Wilkosz, Grzegorz Gustof, Marcin Kalinowski, Lukasz Osuch
  • Patent number: 12154327
    Abstract: A method, apparatus, and computer program product are provided for using aerial imagery to identify and distinguish fluid leaks from objects, structures, and shadows in aerial imagery. A method may include: receiving an aerial image of a geographic region; identifying objects within the aerial image; determining shadow areas associated with the objects within the aerial image; determining whether areas exist contiguous with the shadow areas and extending beyond the shadow areas; identifying one or more fluid leaks in response to areas existing contiguous with the shadow areas and extending beyond the shadow areas; and generating an indication of the one or more fluid leaks including one or more locations of the one or more fluid leaks.
    Type: Grant
    Filed: January 5, 2022
    Date of Patent: November 26, 2024
    Assignee: HERE GLOBAL B.V.
    Inventor: Priyank Sameer
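The geometric test in the abstract above, a dark area contiguous with a shadow yet extending beyond it, can be sketched on tiny boolean masks. The adjacency rule and mask representation are illustrative; real aerial-imagery processing is far more involved.

```python
# Sketch: a dark region counts as a candidate fluid leak when it is
# contiguous with an object's shadow but extends past the shadow's
# footprint.

def neighbors(y, x, h, w):
    for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
        ny, nx = y + dy, x + dx
        if 0 <= ny < h and 0 <= nx < w:
            yield ny, nx

def leak_candidate(dark, shadow):
    """True if some dark pixel outside the shadow touches the shadow."""
    h, w = len(dark), len(dark[0])
    for y in range(h):
        for x in range(w):
            if dark[y][x] and not shadow[y][x]:
                if any(shadow[ny][nx] for ny, nx in neighbors(y, x, h, w)):
                    return True
    return False

shadow_mask = [[1, 1, 0], [1, 1, 0], [0, 0, 0]]
dark_mask   = [[1, 1, 1], [1, 1, 1], [0, 0, 0]]  # dark extends past shadow
leak_found = leak_candidate(dark_mask, shadow_mask)
```

When the dark area coincides exactly with the shadow, no leak is flagged; the signal is specifically the dark mass spilling beyond the shadow boundary.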
  • Patent number: 12155924
    Abstract: An image processing device detects a subject inside a captured image, acquires information of a first subject area and a second subject area that is a part thereof, and detects feature points of the subject areas. The image processing device calculates evaluation values from the detected feature points. If the second subject area is detected, control is performed to display a display frame corresponding to the subject area on the display unit. If the second subject area is not detected, control is performed to display ranging position candidate frames as evaluation values on the display unit.
    Type: Grant
    Filed: January 13, 2022
    Date of Patent: November 26, 2024
    Assignee: Canon Kabushiki Kaisha
    Inventors: Yukihiro Kogai, Yasushi Ohwa, Hiroyuki Yaguchi, Takahiro Usami, Hiroyasu Katagawa, Tomotaka Uekusa, Toru Aida
  • Patent number: 12153732
    Abstract: The present application provides a gaze point estimation method and device, and an electronic device. The method includes: acquiring user image data; acquiring a facial feature vector according to a preset first convolutional neural network and the facial image; acquiring a position feature vector according to a preset first fully connected network and the position data; acquiring a binocular fusion feature vector according to a preset eye feature fusion network, the left-eye image, and the right-eye image; and acquiring position information about a gaze point of a user according to a preset second fully connected network, the facial feature vector, the position feature vector, and the binocular fusion feature vector. In this technical solution, the relation between eye images and face images is utilized to achieve accurate gaze point estimation.
    Type: Grant
    Filed: December 8, 2021
    Date of Patent: November 26, 2024
    Assignee: BEIHANG UNIVERSITY
    Inventors: Feng Lu, Yiwei Bao, Qinping Zhao
  • Patent number: 12155969
    Abstract: A system for outputting a surveillance video comprises a plurality of cameras, each mounted on a movable object; one or more memories storing, for each of the plurality of cameras, captured video data and time series position data; and one or more processors. The one or more processors select, for each predetermined time width, an observation camera, which is the one camera whose time series position data within the predetermined time width fall inside an observation area, and acquire the captured video data of the observation camera within the predetermined time width. The one or more processors then output the captured video data acquired for each predetermined time width in chronological order. Selecting the observation camera includes selecting the one camera with the longest distance to travel in the observation area within the predetermined time width.
    Type: Grant
    Filed: December 6, 2022
    Date of Patent: November 26, 2024
    Assignee: TOYOTA JIDOSHA KABUSHIKI KAISHA
    Inventor: Norimasa Kobori
  • Patent number: 12154283
    Abstract: Disclosed are a ship detection and tracking method and device. The method comprises: pre-training a yolov5s network model with a first preset number of high-speed moving object datasets to obtain an initial ship detection model; adding a preset attention mechanism to the initial ship detection model to obtain a transitional ship detection model; inputting a second preset number of annotated ship datasets into the transitional ship detection model for training to obtain a target ship detection model; inputting an original ship video frame into the target ship detection model and outputting the position information of the ship to be tracked; and performing ship video image tracking of the ship to be tracked according to the position information. Because the disclosure performs pre-training, only a small amount of annotated ship data is required to determine the target ship detection model, achieving video image tracking of the target ship.
    Type: Grant
    Filed: December 22, 2023
    Date of Patent: November 26, 2024
    Assignee: Wuhan University of Technology
    Inventors: Linying Chen, Yamin Huang, Pengfei Chen, Junmin Mou
  • Patent number: 12153212
    Abstract: An eye-tracking apparatus is disclosed. At least one light source and a plurality of sensors are arranged at a periphery of a first surface of at least one lens. A frame is employed to hold the at least one lens. A processor coupled to the at least one light source and the plurality of sensors is configured to control the at least one light source to emit light towards the user's eye, control the plurality of sensors to sense reflections of the light off a surface of the user's eye, and process sensor data pertaining to the sensed reflections to determine a gaze direction of the user's eye.
    Type: Grant
    Filed: September 29, 2021
    Date of Patent: November 26, 2024
    Assignee: Pixieray Oy
    Inventors: Klaus Melakari, Qing Xu, Ville Miettinen, Niko Eiden
  • Patent number: 12155971
    Abstract: The method comprises determining a set of coordinates for each of two or more appearances of a target subject within a sequence of images, the set of coordinates of the two or more appearances of the target subject defining a first path; determining a set of coordinates for each of two or more appearances of a related subject within the sequence of images, the related subject relating to the target subject, the set of coordinates of the two or more appearances of the related subject defining a second path; determining one or more minimum distances between the first path and the second path so as to determine at least a region of interest; determining a timestamp of a first appearance and a timestamp of a last appearance of the target subject; and determining a timestamp of a first appearance and a timestamp of a last appearance of the related subject.
    Type: Grant
    Filed: August 10, 2023
    Date of Patent: November 26, 2024
    Assignee: NEC CORPORATION
    Inventors: Hui Lam Ong, Satoshi Yamazaki, Hong Yen Ong, Wei Jian Peh
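The minimum-distance step between the two appearance paths can be approximated by checking every vertex of one polyline against every segment of the other. This is an illustrative sketch under my own assumptions (function names and the (x, y) coordinate encoding are not from the patent), and it omits the segment-segment intersection case that crossing paths would require:

```python
# Illustrative sketch: approximate the minimum distance between two paths,
# each given as a list of (x, y) coordinates of successive appearances.
import math

def point_segment_dist(p, a, b):
    """Distance from point p to the segment from a to b."""
    ax, ay = a; bx, by = b; px, py = p
    dx, dy = bx - ax, by - ay
    if dx == 0 and dy == 0:
        return math.hypot(px - ax, py - ay)
    # Clamp the projection parameter t to stay on the segment.
    t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)))
    return math.hypot(px - (ax + t * dx), py - (ay + t * dy))

def min_path_distance(path1, path2):
    """Minimum vertex-to-segment distance between two polyline paths."""
    best = float("inf")
    for pts, path in ((path1, path2), (path2, path1)):
        for p in pts:
            for a, b in zip(path, path[1:]):
                best = min(best, point_segment_dist(p, a, b))
    return best
```

For example, two parallel paths three units apart yield a minimum distance of 3.0, and the location of the minimizing pair could serve as the region-of-interest seed the abstract describes.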
  • Patent number: 12148178
    Abstract: A pattern matching unit carries out a pattern matching between a photographed image obtained by photographing a workpiece with a monocular camera and a first plurality of models having a plurality of sizes and a plurality of angles, and selects a model having a size and an angle with the highest degree of matching. A primary detection unit detects a position and an angle of an uppermost workpiece based on the selected model. An actual load height calculation unit calculates an actual load height of the uppermost workpiece based on a hand height. A secondary detection unit re-detects the position and the angle of the uppermost workpiece based on a model having a size and an angle with the highest degree of matching selected by carrying out a pattern matching between the photographed image and a second plurality of models selected or newly created based on the actual load height.
    Type: Grant
    Filed: June 3, 2020
    Date of Patent: November 19, 2024
    Assignee: AMADA CO., LTD.
    Inventors: Teruyuki Kubota, Takeshi Washio, Satoshi Takatsu, Naohumi Miura, Shuuhei Terasaki
  • Patent number: 12147243
    Abstract: A robot includes an image sensor that captures the environment of a storage site. The robot visually recognizes regularly shaped structures to navigate through the storage site using various object detection and image segmentation techniques. In response to receiving a target location in the storage site, the robot moves to the target location along a path. The robot receives the images as the robot moves along the path. The robot analyzes the images captured by the image sensor to determine the current location of the robot in the path by tracking a number of regularly shaped structures in the storage site passed by the robot. The regularly shaped structures may be racks, horizontal bars of the racks, and vertical bars of the racks. The robot can identify the target location by counting the number of rows and columns that the robot has passed.
    Type: Grant
    Filed: May 11, 2023
    Date of Patent: November 19, 2024
    Assignee: Brookhurst Garage, Inc.
    Inventors: Young Joon Kim, Kyuman Lee, Sunyou Hwang
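The counting idea in the abstract above, tracking progress by tallying regularly shaped structures the robot passes, can be reduced to a toy event counter. The event names and target encoding here are my own assumptions, not the patent's:

```python
# Toy sketch of localization by counting passed structures: the robot
# increments row/column counters as detected rack bars pass by, and reports
# arrival once the counts reach the target location.
def track_progress(events, target_row, target_col):
    """events: iterable of 'row' / 'col' detections in the order passed.
    Returns True as soon as the target (row, col) has been reached."""
    rows = cols = 0
    for ev in events:
        if ev == "row":
            rows += 1
        elif ev == "col":
            cols += 1
        if rows >= target_row and cols >= target_col:
            return True
    return False
```

In the actual system the events would come from object detection and image segmentation on the camera stream; here they are just labels.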
  • Patent number: 12148209
    Abstract: The following relates generally to light detection and ranging (LIDAR) and artificial intelligence (AI). In some embodiments, a system: receives LIDAR data generated from a LIDAR camera; measures a plurality of dimensions of a room of the home based upon processor analysis of the LIDAR data; builds a 3D model of the room based upon the measured plurality of dimensions; receives an indication of a proposed change to the room; modifies the 3D model to include the proposed change to the room; and displays a representation of the modified 3D model.
    Type: Grant
    Filed: August 19, 2022
    Date of Patent: November 19, 2024
    Assignee: STATE FARM MUTUAL AUTOMOBILE INSURANCE COMPANY
    Inventors: Nicholas Carmelo Marotta, Laura Kennedy, J D Johnson Willingham
  • Patent number: 12148060
    Abstract: A vehicle has one or more cameras configured to record one or more images of a person approaching the vehicle. The camera(s) can be configured to send biometric data derived from the image(s). The vehicle can include a computing system configured to receive the biometric data and to determine a risk score of the person based on the received biometric data and an AI technique, such as an ANN or a decision tree. The received biometric data or a derivative thereof can be input for the AI technique. The computing system can also be configured to determine whether to notify a driver of the vehicle of the risk score based on the risk score exceeding a risk threshold. The vehicle can also include a user interface, configured to output the risk score to notify the driver when the computing system determines the risk score exceeds the risk threshold.
    Type: Grant
    Filed: October 20, 2022
    Date of Patent: November 19, 2024
    Assignee: Lodestar Licensing Group LLC
    Inventor: Robert Richard Noel Bielby
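The decision-tree branch of the abstract's risk scoring can be sketched with a tiny hand-rolled tree. Every feature name, threshold, and score below is invented for illustration; the patent does not specify them:

```python
# Minimal hand-rolled decision tree standing in for the AI technique the
# abstract mentions; features and thresholds here are hypothetical.
def risk_score(features):
    """features: dict with keys like 'approach_speed' and 'face_visible'
    (hypothetical biometric-derived inputs). Returns a score in [0, 1]."""
    if not features.get("face_visible", True):
        # Obscured face: higher risk, more so when approaching quickly.
        return 0.9 if features.get("approach_speed", 0.0) > 1.5 else 0.6
    return 0.4 if features.get("approach_speed", 0.0) > 2.5 else 0.1

def should_notify(features, threshold=0.5):
    """Notify the driver only when the score exceeds the risk threshold."""
    return risk_score(features) > threshold
```

The threshold gate mirrors the abstract's "notify ... based on the risk score exceeding a risk threshold" step; a production system would learn the tree from data rather than hard-code it.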
  • Patent number: 12149952
    Abstract: Neighbor cell planning method and device based on Thiessen polygons are provided. The method includes acquiring working parameter data of cells; adjusting longitudes and latitudes of the cells based on the working parameter data; constructing a Delaunay triangulation based on the adjusted longitudes and latitudes of the cells; generating Thiessen polygons based on the Delaunay triangulation; and calculating neighbor cells of each cell based on the Thiessen polygons and exporting a list of neighbor cells for each cell.
    Type: Grant
    Filed: June 24, 2020
    Date of Patent: November 19, 2024
    Assignee: NANJING HOWSO TECHNOLOGY CO., LTD.
    Inventors: Jibin Wang, Lu Hou, Dalong Chen
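The last step of the method above follows from a standard duality: two Thiessen (Voronoi) cells are neighbors exactly when their sites share a Delaunay edge, so the neighbor list can be read directly off the triangulation. A minimal sketch, assuming the triangulation is already available as index triples (in practice it could come from, e.g., scipy.spatial.Delaunay; the toy input here is hard-coded):

```python
# Sketch of deriving neighbor cells from a Delaunay triangulation: every edge
# of every triangle links two mutually neighboring Thiessen cells.
from collections import defaultdict

def neighbor_cells(triangles, n_cells):
    """triangles: list of 3-tuples of cell indices forming the Delaunay
    triangulation. Returns a sorted neighbor list per cell."""
    nbrs = defaultdict(set)
    for a, b, c in triangles:
        for u, v in ((a, b), (b, c), (a, c)):
            nbrs[u].add(v)
            nbrs[v].add(u)
    return {i: sorted(nbrs[i]) for i in range(n_cells)}
```

Exporting `neighbor_cells(...)` per cell corresponds to the final "export a list of neighbor cells" step of the abstract.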
  • Patent number: 12147496
    Abstract: Systems and methods to automatically generate training data for machine learning models may include an imaging device to capture imaging data, an image processing or rendering system to receive the imaging data and render a three-dimensional model of an object of interest overlaying the imaging data, an automatic mask extraction or generation system to extract or determine a mask, label, or annotation associated with the three-dimensional model and a plurality of pixels associated with the object of interest from a perspective of the imaging device, and a machine learning model to receive the imaging data and the mask as training data.
    Type: Grant
    Filed: November 3, 2021
    Date of Patent: November 19, 2024
    Assignee: Amazon Technologies, Inc.
    Inventor: Francesco Giuseppe Callari
  • Patent number: 12148218
    Abstract: This disclosure is directed to techniques in which a first user in an environment scans visual indicia associated with an item, such as a barcode, before handing the item to a second user. One or more computing devices may receive an indication of the scan, retrieve image data of the interaction from a camera within the environment, identify the user that received the item, and update a virtual cart associated with the second user to indicate addition of the item.
    Type: Grant
    Filed: June 2, 2023
    Date of Patent: November 19, 2024
    Assignee: Amazon Technologies, Inc.
    Inventors: Ivan Stankovic, Joseph M Alyea, Jiajun Zhao, Kartik Muktinutalapati, Waqas Ahmed, Dilip Kumar, Danny Guan, Nishitkumar Ashokkumar Desai, Longlong Zhu
  • Patent number: 12148185
    Abstract: Embodiments are directed to parameter adjustment for sensors. A calibration model and a calibration profile for a sensor may be provided. Calibration parameters associated with the sensor may be determined based on the calibration profile. The sensor may be configured to use a value of the calibration parameter based on the calibration profile. Trajectories may be generated based on a stream of events from the sensor. Metrics associated with the sensor events or the trajectories may be determined. If a metric value is outside of a control range, further actions may be iteratively performed, including: modifying the value of the calibration parameter based on the calibration model; configuring the sensor to use the modified value of the calibration parameter; and redetermining the metrics based on additional trajectories. Once the metric is within the control range, the iteration may be terminated and the calibration profile may be updated.
    Type: Grant
    Filed: July 15, 2022
    Date of Patent: November 19, 2024
    Assignee: Summer Robotics, Inc.
    Inventors: Schuyler Alexander Cullen, Brian Alexander Paden
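The iterative loop in the abstract above is a closed-loop control pattern: adjust the parameter, re-measure, stop when the metric lands in the control range. A hedged sketch follows; `metric_fn`, the fixed-step adjustment rule, and the bounds are placeholders of mine, not the vendor's calibration model:

```python
# Hedged sketch of the iterative calibration loop: nudge the parameter until
# the measured metric falls inside the control range, then return the updated
# value (standing in for the calibration-profile update).
def calibrate(param, metric_fn, lo, hi, step=0.1, max_iters=100):
    """Repeatedly re-measure metric_fn(param) and adjust param until the
    metric is within [lo, hi]; raise if it never converges."""
    for _ in range(max_iters):
        m = metric_fn(param)
        if lo <= m <= hi:
            return param  # in control range: terminate the iteration
        param += step if m < lo else -step
    raise RuntimeError("calibration did not converge")
```

A real calibration model would choose the step from the model's sensitivity rather than a constant, and the metric would be recomputed from fresh trajectories instead of a pure function.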
  • Patent number: 12148203
    Abstract: A method for content-aware type-on-path generation is implemented via a computing system including a processor. The method includes surfacing an image via a graphics GUI of a graphics application and detecting one or more salient objects within the image using a CNN model. The method also includes generating a contour map for each detected salient object and generating a path along the contours of each salient object by applying a defined offset to the corresponding contour map. The method further includes applying input text characters as type-on-path along the generated path based at least on user input received via the graphics GUI.
    Type: Grant
    Filed: May 16, 2022
    Date of Patent: November 19, 2024
    Assignee: Microsoft Technology Licensing, LLC.
    Inventor: Mrinal Kumar Sharma
  • Patent number: 12142138
    Abstract: An information processing apparatus according to the present disclosure includes a recognition processing portion that performs recognition processing of a recognition target based on a captured image of the surrounding environment of a mobile body, and an output control portion that, when the recognition target is not recognized, causes an output apparatus to output non-recognition notification information indicating that the recognition target is not recognized.
    Type: Grant
    Filed: February 25, 2021
    Date of Patent: November 12, 2024
    Assignee: SONY GROUP CORPORATION
    Inventor: Keitaro Yamamoto
  • Patent number: 12142032
    Abstract: An embodiment of a semiconductor package apparatus may include technology to pre-process an image to simplify a background of the image, and perform object detection on the pre-processed image with the simplified background. For example, an embodiment of a semiconductor package may include technology to pre-process an image to subtract the background from the image and perform object detection on the pre-processed image with the background subtracted. Other embodiments are disclosed and claimed.
    Type: Grant
    Filed: January 10, 2022
    Date of Patent: November 12, 2024
    Assignee: Intel Corporation
    Inventors: Yuming Li, Zhen Zhou, Xiaodong Wang, Quan Yin
  • Patent number: 12143756
    Abstract: A system may track a person at a location across multiple cameras in real time, while maintaining identification of the person throughout their visit. The person may not be personally identified; however, the person's presence in video frames captured by different cameras may be linked together. The linked frames may be evaluated by one or more analytics applications. For example, an analytics application may determine a trajectory of a person through a location and plot an overlay of that person's trip through the location on a map. This information may be used, for example, to optimize store layout, for asset protection purposes, for counting the number of people at the location at a given time (e.g., for understanding busy hours of the location), or for determining whether social distancing or other health and/or safety measures are in compliance.
    Type: Grant
    Filed: October 8, 2021
    Date of Patent: November 12, 2024
    Assignee: Target Brands, Inc.
    Inventors: Tapan Shah, Ramanan Muthuraman, Nicholas Scott Eggert
  • Patent number: 12142003
    Abstract: A system and method of detecting display fit measurements and/or ophthalmic measurements for a head mounted wearable computing device including a display device is provided. An image of a fitting frame worn by a user of the computing device is captured by the user, through an application running on the computing device. One or more visual markers each including a distinct pattern are detected in the image including the fitting frame. A model of the fitting frame, and configuration information associated with the fitting frame, are determined based on the detection of the pattern. A three-dimensional pose of the fitting frame is determined based on the detected visual marker(s) and patterns, and the configuration information associated with the fitting frame. The display device of the head mounted wearable computing device can then be configured based on the three-dimensional pose of the fitting frame as captured in the image.
    Type: Grant
    Filed: April 23, 2021
    Date of Patent: November 12, 2024
    Assignee: Google LLC
    Inventors: Idris Syed Aleem, Rees Anwyl Samuel Simmons, Sushant Umesh Kulkarni, Ahmed Gawish, Mayank Bhargava