Target Tracking Or Detecting Patents (Class 382/103)
  • Patent number: 11977979
    Abstract: Systems and techniques are provided for generating one or more models. For example, a process can include obtaining a plurality of input images corresponding to faces of one or more people during a training interval. The process can include determining a value of the coefficient representing at least the portion of the facial expression for each of the plurality of input images during the training interval. The process can include determining, from the determined values of the coefficient representing at least the portion of the facial expression for each of the plurality of input images during the training interval, an extremum value of the coefficient representing at least the portion of the facial expression during the training interval. The process can include generating an updated bounding value for the coefficient representing at least the portion of the facial expression based on the initial bounding value and the extremum value.
    Type: Grant
    Filed: July 23, 2021
    Date of Patent: May 7, 2024
    Assignee: QUALCOMM Incorporated
    Inventors: Kuang-Man Huang, Min-Hui Lin, Ke-Li Cheng, Michel Adib Sarkis
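The bound-update step this abstract describes can be illustrated with a minimal sketch. This is not the patented method; the `blend` factor and the use of `max()` as the extremum are illustrative assumptions:

```python
def update_bound(initial_bound, coeff_values, blend=0.5):
    """Blend an initial bounding value toward the extremum coefficient
    value observed during a training interval.

    `blend` and the choice of max() as the extremum are assumptions
    for illustration, not taken from the patent.
    """
    extremum = max(coeff_values)
    return initial_bound + blend * (extremum - initial_bound)
```

For example, an initial bound of 1.0 with observed coefficients peaking at 0.8 would yield an updated bound of 0.9 under these assumptions.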
  • Patent number: 11978263
    Abstract: A method for determining a safe state for a vehicle includes disposing a camera at a vehicle and disposing an electronic control unit (ECU) at the vehicle. Frames of image data are captured by the camera and provided to the ECU. An image processor of the ECU processes frames of image data captured by the camera. A condition is determined via processing at the image processor of the ECU frames of image data captured by the camera. The condition includes a shadow present in the field of view of the camera within ten frames of image data captured by the camera or a damaged condition of the imager within two minutes of operation of the camera. The ECU determines a safe state for the vehicle responsive to determining the condition.
    Type: Grant
    Filed: May 22, 2023
    Date of Patent: May 7, 2024
    Assignee: MAGNA ELECTRONICS INC.
    Inventors: Horst D. Diessner, Richard C. Bozich, Aleksandar Stefanovic, Anant Kumar Lall, Nikhil Gupta
  • Patent number: 11978219
    Abstract: A method for determining motion information is provided to be executed by a computing device. The method includes: determining a first image and a second image, the first image and the second image each including an object; determining a first pixel region based on a target feature point on the object in the first image; determining a second pixel region according to a pixel difference among a plurality of first pixel points in the first pixel region and further according to the target feature point; and obtaining motion information of the target feature point according to the plurality of second pixel points and the second image, the motion information being used for indicating changes in locations of the target feature point in the first image and the second image.
    Type: Grant
    Filed: May 18, 2021
    Date of Patent: May 7, 2024
    Assignee: TENCENT TECHNOLOGY (SHENZHEN) COMPANY LIMITED
    Inventors: Yonggen Ling, Shenghao Zhang
  • Patent number: 11978264
    Abstract: Systems and methods for constructing and managing a unique road sign knowledge graph across various countries and regions are disclosed. The system utilizes machine learning methods to assist humans when comparing a new sign template with a plurality of stored sign templates to reduce or eliminate redundancy in the road sign knowledge graph. Such a machine learning method and system is also used in providing visual attributes of road signs such as sign shapes, colors, symbols, and the like. If the machine learning determines that the input road sign template is not found in the road sign knowledge graph, the input sign template can be added to the road sign knowledge graph. The road sign knowledge graph can be maintained to add sign templates that are not already in the knowledge graph but are found in the real world by integrating human annotators' feedback during ground truth generation for machine learning.
    Type: Grant
    Filed: August 17, 2021
    Date of Patent: May 7, 2024
    Assignee: Robert Bosch GmbH
    Inventors: Ji Eun Kim, Kevin H. Huang, Mohammad Sadegh Norouzzadeh, Shashank Shekhar
  • Patent number: 11978181
    Abstract: Apparatuses, systems, and techniques to process luminance and/or radiance values of one or more images from one or more cameras using one or more neural networks to perform a machine vision task. In at least one embodiment, one or more neural networks determine detection difficulty levels of objects within the one or more images and perform a machine vision task based on the determined detection difficulty levels of objects within images associated with that task.
    Type: Grant
    Filed: December 11, 2020
    Date of Patent: May 7, 2024
    Assignee: NVIDIA Corporation
    Inventors: Sean Midthun Pieper, Robin Brian Jenkin
  • Patent number: 11978217
    Abstract: A long-term object tracker employs a continuous learning framework to overcome drift in the tracking position of a tracked object. The continuous learning framework consists of a continuous learning module that accumulates samples of the tracked object to improve the accuracy of object tracking over extended periods of time. The continuous learning module can include a sample pre-processor to refine a location of a candidate object found during object tracking, and a cropper to crop a portion of a frame containing a tracked object as a sample and to insert the sample into a continuous learning database to support future tracking.
    Type: Grant
    Filed: January 3, 2019
    Date of Patent: May 7, 2024
    Assignee: Intel Corporation
    Inventors: Lidan Zhang, Ping Guo, Haibing Ren, Yimin Zhang
  • Patent number: 11975607
    Abstract: During an at least partially autonomous driving operation of a motor vehicle, the arrangement and mode of action of environmental sensors of the motor vehicle, the sensor data of which are used in the at least partially autonomous driving operation, are visualized by use of a display device arranged in the motor vehicle. The motor vehicle includes a data interface for wirelessly transmitting data to the display device.
    Type: Grant
    Filed: May 21, 2019
    Date of Patent: May 7, 2024
    Assignee: AUDI AG
    Inventor: Marcus Kühne
  • Patent number: 11978138
    Abstract: There is provided with an information processing apparatus. A display control unit displays, on an image, a polygon having vertices at respective positions of candidates for at least three detection targets in the image. A determining unit determines, as the at least three detection targets, the candidates for the at least three detection targets, based on user input. A calculating unit calculates a parameter for estimating a size of a detection target that corresponds to a respective position in the image, based on positions and sizes of the determined at least three detection targets.
    Type: Grant
    Filed: May 25, 2022
    Date of Patent: May 7, 2024
    Assignee: CANON KABUSHIKI KAISHA
    Inventor: Wataru Mashiko
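One way to picture the parameter calculation this abstract describes: under a simple linear assumption, object size at an image position can be modeled as a plane s = a·x + b·y + c fitted through the three annotated positions and sizes. This sketch (Cramer's rule, with hypothetical helper names) is an illustration, not Canon's actual parameterization:

```python
def fit_size_plane(p1, p2, p3):
    """Solve s = a*x + b*y + c from three (x, y, size) samples
    via Cramer's rule. A linear size model is an assumption
    for illustration, not taken from the patent."""
    (x1, y1, s1), (x2, y2, s2), (x3, y3, s3) = p1, p2, p3
    det = x1 * (y2 - y3) - y1 * (x2 - x3) + (x2 * y3 - x3 * y2)
    a = (s1 * (y2 - y3) - y1 * (s2 - s3) + (s2 * y3 - s3 * y2)) / det
    b = (x1 * (s2 - s3) - s1 * (x2 - x3) + (x2 * s3 - x3 * s2)) / det
    c = (x1 * (y2 * s3 - y3 * s2) - y1 * (x2 * s3 - x3 * s2)
         + s1 * (x2 * y3 - x3 * y2)) / det
    return a, b, c
```

With the fitted coefficients, the expected size at any other image position follows from a·x + b·y + c.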
  • Patent number: 11978155
    Abstract: An apparatus to facilitate inferred object shading is disclosed. The apparatus comprises one or more processors to receive rasterized pixel data and hierarchical data associated with one or more objects and perform an inferred shading operation on the rasterized pixel data, including using one or more trained neural networks to perform texture and lighting on the rasterized pixel data to generate a pixel output, wherein the one or more trained neural networks uses the hierarchical data to learn a three-dimensional (3D) geometry, latent space and representation of the one or more objects.
    Type: Grant
    Filed: September 25, 2020
    Date of Patent: May 7, 2024
    Assignee: Intel Corporation
    Inventors: Selvakumar Panneer, Mrutunjayya Mrutunjayya, Carl S. Marshall, Ravishankar Iyer, Zack Waters
  • Patent number: 11975738
    Abstract: A first image can be acquired from a first sensor included in a vehicle and input to a deep neural network to determine a first bounding box for a first object. A second image can be acquired from the first sensor. Latitudinal and longitudinal motion data can be input from second sensors included in the vehicle, corresponding to the time between inputting the first image and inputting the second image. A second bounding box can be determined by translating the first bounding box based on the latitudinal and longitudinal motion data. The second image can be cropped based on the second bounding box. The cropped second image can be input to the deep neural network to detect a second object. The first image, the first bounding box, the second image, and the second bounding box can be output.
    Type: Grant
    Filed: June 3, 2021
    Date of Patent: May 7, 2024
    Assignee: Ford Global Technologies, LLC
    Inventors: Gurjeet Singh, Apurbaa Mallik, Rohun Atluri, Vijay Nagasamy, Praveen Narayanan
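The translate-then-crop step this abstract describes can be sketched in a few lines. The coordinate convention (x right, y down, `(x_min, y_min, x_max, y_max)` boxes) and helper names are assumptions for illustration:

```python
def translate_bbox(bbox, dx, dy):
    """Shift a (x_min, y_min, x_max, y_max) bounding box by the
    pixel displacement implied by the vehicle's motion data."""
    x0, y0, x1, y1 = bbox
    return (x0 + dx, y0 + dy, x1 + dx, y1 + dy)

def crop(image, bbox):
    """Crop a row-major 2D image (list of rows) to the bounding box."""
    x0, y0, x1, y1 = bbox
    return [row[x0:x1] for row in image[y0:y1]]
```

In the patent's pipeline the cropped region would then be fed back into the deep neural network to detect the second object.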
  • Patent number: 11977980
    Abstract: An information processing method includes the following executed by a computer: acquiring a first image and object data of an object appearing in the first image, extracting a portion of the first image that corresponds to a difference between the object data and an object detection result obtained by inputting the first image to a trained model, the trained model receiving an image as input to output an object detection result, acquiring a second image that includes a portion corresponding to the same object data as object data corresponding to the extracted portion of the first image, reflecting an image based on the extracted portion of the first image in the portion of the acquired second image that corresponds to the same object data, and generating training data for the trained model.
    Type: Grant
    Filed: August 10, 2021
    Date of Patent: May 7, 2024
    Assignee: PANASONIC INTELLECTUAL PROPERTY MANAGEMENT CO., LTD.
    Inventors: Masaki Takahashi, Kazunobu Ishikawa, Yusuke Tsukamoto, Shota Onishi
  • Patent number: 11977604
    Abstract: A method, device and apparatus for recognizing, categorizing and searching for a garment, and a storage medium. The method for recognizing a garment comprises: acquiring a target image containing a garment to be recognized, and determining, on the basis of the target image, a set of heat maps corresponding to key feature points contained in the target image, the set of heat maps comprising position probability heat maps corresponding to the respective key feature points contained in the target image (101); and processing the set of heat maps on the basis of a shape constraint corresponding to the target image, and determining position probability information of the key feature points contained in the target image (102).
    Type: Grant
    Filed: December 23, 2019
    Date of Patent: May 7, 2024
    Assignees: Beijing Jingdong Shangke Information Tech Co., Ltd, Beijing Jingdong Century Trading Co., Ltd.
    Inventor: Hongbin Xie
  • Patent number: 11978283
    Abstract: Systems and methods are provided for performing operations comprising: capturing, by an electronic mirroring device, a video feed received from a camera of the electronic mirroring device, the video feed depicting a user; displaying, by one or more processors of the electronic mirroring device, one or more menu options on the video feed that depicts the user; identifying a hand of the user in the video feed; determining that a position of the hand in the video feed overlaps a position of a given menu option of the one or more menu options; and performing an operation associated with the given menu option in response to determining that the position of the hand in the video feed overlaps the position of the given menu option.
    Type: Grant
    Filed: March 16, 2021
    Date of Patent: May 7, 2024
    Assignee: Snap Inc.
    Inventors: Dylan Shane Eirinberg, Kyle Goodrich, Andrew James McPhee, Daniel Moreno
  • Patent number: 11972578
    Abstract: A method and system for tracking an object in an input video using online training includes a step for training a classifier model by using global pattern matching, and a step for classifying and tracking each target through online training including the classifier model.
    Type: Grant
    Filed: August 27, 2021
    Date of Patent: April 30, 2024
    Assignee: NAVER CORPORATION
    Inventors: Myunggu Kang, Dongyoon Wee, Soonmin Bae
  • Patent number: 11972042
    Abstract: A gaze detection assembly comprises a plurality of infrared (IR) light emitters configured to emit IR light toward a user eye and an IR camera configured to sequentially capture IR images of the user eye. A controller is configured to, during a reference frame, control the plurality of IR light emitters to emit IR light toward the user eye with a reference intensity distribution. During the reference frame, an IR image is captured depicting a first glint distribution on the user eye. Based at least in part on the first glint distribution, during a subsequent frame, the plurality of IR light emitters are controlled to emit IR light toward the user eye with a subsequent intensity distribution, different than the reference intensity distribution. During the subsequent frame, a second IR image is captured depicting a second glint distribution on the user eye.
    Type: Grant
    Filed: June 15, 2021
    Date of Patent: April 30, 2024
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Benjamin Eliot Lundell, David C Rohn, Curtis Alan Tesdahl, Marko Bezulj, Christopher Charles Aholt, Navid Poulad
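One plausible reading of the adaptive illumination loop in this abstract: boost the drive intensity of emitters whose glints were weak or missing in the reference frame. This is a hypothetical sketch, not Microsoft's control law; the `boost` and `cap` parameters are invented for illustration:

```python
def adjust_intensities(ref_intensities, glints_detected, boost=1.5, cap=1.0):
    """Return a subsequent-frame intensity distribution: emitters whose
    glints were not detected in the reference frame are driven harder,
    clamped to a maximum safe intensity. Parameters are illustrative."""
    return [min(cap, i * boost) if not seen else i
            for i, seen in zip(ref_intensities, glints_detected)]
```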
  • Patent number: 11973732
    Abstract: A system comprises one or more processors of a machine and a memory storing instructions that, when executed by the one or more processors, cause the machine to perform operations. The operations comprise: receiving an image; generating an avatar with a trained neural network based on the image, the trained neural network predicting multiple trait values for the avatar; and sending a message with the generated avatar.
    Type: Grant
    Filed: February 16, 2021
    Date of Patent: April 30, 2024
    Assignee: Snap Inc.
    Inventors: Caner Berkay Antmen, Michal Dobrogost
  • Patent number: 11970246
    Abstract: A ship cabin loading capacity measurement method and apparatus are provided, comprising: acquiring point cloud measurement data of a ship cabin; optimizing the point cloud measurement data according to a predetermined point cloud data processing rule to generate optimized ship cabin point cloud data; and processing the ship cabin point cloud data with a predetermined loading capacity calculation rule to obtain ship cabin loading capacity data. According to the ship cabin loading capacity measurement method of the present invention, the point cloud measurement data can be acquired by a lidar, and because the point cloud data processing rule and the calculation rule can be deployed in a computer device in advance, the loading capacity of a ship cabin can be obtained quickly and precisely once the point cloud measurement data has been acquired.
    Type: Grant
    Filed: February 5, 2021
    Date of Patent: April 30, 2024
    Assignee: Zhoushan Institute of Calibration and Testing for Quality and Technology Supervision
    Inventors: Huadong Hao, Cunjun Li, Xianlei Chen, Haolei Shi, Ze'nan Wu, Junxue Chen, Zhengqian Shen, Yingying Wang, Huizhong Xu
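A toy version of a loading capacity calculation rule: bin the lidar surface points into a horizontal grid and treat the mean height above the cabin floor in each cell as the fill level of that column. The gridding scheme and parameter names are assumptions for illustration, not the patented rule:

```python
from collections import defaultdict

def cabin_volume(points, cell=1.0, floor_z=0.0):
    """Estimate filled volume from (x, y, z) surface points by summing
    mean column height * cell area over an x-y grid. `cell` (grid
    spacing) and `floor_z` (cabin floor height) are illustrative."""
    cells = defaultdict(list)
    for x, y, z in points:
        cells[(int(x // cell), int(y // cell))].append(z - floor_z)
    return sum((sum(zs) / len(zs)) * cell * cell for zs in cells.values())
```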
  • Patent number: 11971694
    Abstract: An abnormal-sound detection device has an imaging unit, an operation range identification unit, a sound collection unit, an abnormal-sound detection unit, an abnormal-sound generation position identification unit, and an abnormal-sound source determination unit. The operation range identification unit identifies and stores the operation range of a diagnosis object on the basis of the image captured by an imaging unit. The abnormal-sound detection unit detects abnormalities in sounds included in the sounds collected by the sound collection unit, the sounds arriving from the diagnosis object. When an abnormality in a sound is detected by the abnormal-sound detection unit, the abnormal-sound generation position identification unit identifies the position at which the abnormality of the sound was generated.
    Type: Grant
    Filed: February 15, 2019
    Date of Patent: April 30, 2024
    Assignee: NEC CORPORATION
    Inventor: Mitsuru Sendoda
  • Patent number: 11972579
    Abstract: Disclosed herein is a system and method directed to object tracking using plurality of cameras. The system includes the plurality of cameras disposed around a playing surface in a mirrored configuration, and where the plurality of cameras are time-synchronized. The system further includes logic that, when executed by a processor, causes performance of operations including: obtaining a sequence of images from the plurality of cameras, continuously detecting an object in image pairs at successive points in time, wherein each image pair corresponds to a single point in time, continuously determining a location of the object within the playing space through triangulation of the object within each image pair, determining wall coordinates of a wall that the object is expected to contact based on the continuously determined location of the object and causing rendering of a visual graphic based on the wall coordinates.
    Type: Grant
    Filed: November 28, 2022
    Date of Patent: April 30, 2024
    Assignee: TOCA Football, Inc.
    Inventor: Conrad Spiteri
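For the triangulation step this abstract mentions, the simplest special case is a rectified stereo pair, where depth follows from disparity as Z = f·B/d. This is a simplification for illustration (the patented system uses mirrored, time-synchronized camera pairs, not necessarily rectified stereo):

```python
def triangulate_depth(x_left, x_right, focal_px, baseline_m):
    """Depth of a point seen in a rectified stereo image pair:
    Z = focal_length * baseline / disparity. Assumes the object's
    left-image x coordinate exceeds its right-image x coordinate."""
    disparity = x_left - x_right
    if disparity <= 0:
        raise ValueError("non-positive disparity; cameras not rectified?")
    return focal_px * baseline_m / disparity
```

For example, a 10-pixel disparity with a 1000-pixel focal length and 0.5 m baseline gives a depth of 50 m.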
  • Patent number: 11971957
    Abstract: This disclosure describes techniques to aggregate sensor data. The techniques perform operations comprising: receiving, from a first sensor, a first profile representing a first set of movement attributes detected by the first sensor in an area at a given point in time; receiving, from a second sensor, a second profile representing a second set of movement attributes detected by the second sensor in the area at the given point in time; computing a similarity measure between the first and second sets of movement attributes of the first and second profiles; determining that the similarity measure exceeds a threshold value; and in response to determining that the similarity measure exceeds the threshold value, associating the first and second profiles with a same first object that is in the area at the given point in time.
    Type: Grant
    Filed: December 22, 2020
    Date of Patent: April 30, 2024
    Assignee: Analog Devices International Unlimited Company
    Inventors: Qian Zhang, Sudong Shu, Rajesh Mahapatra, Raka Singh, Michael L. Long
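The profile-association logic in this abstract (similarity measure, threshold, associate) can be sketched with cosine similarity over movement-attribute vectors. Cosine similarity and the threshold value are assumptions for illustration; the patent does not specify the measure:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length attribute vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def same_object(profile_a, profile_b, threshold=0.9):
    """Associate two sensor profiles with the same object when their
    similarity exceeds the threshold (threshold is illustrative)."""
    return cosine_similarity(profile_a, profile_b) > threshold
```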
  • Patent number: 11965967
    Abstract: A computer implemented scheme for a light detection and ranging (LIDAR) system in which point cloud feature extraction and segmentation are achieved efficiently by: (1) data structuring; (2) edge detection; and (3) region growing.
    Type: Grant
    Filed: November 16, 2022
    Date of Patent: April 23, 2024
    Assignee: Oregon State University
    Inventors: Erzhuo Che, Michael Olsen
  • Patent number: 11966517
    Abstract: A human interface including steps of presenting an image, then receiving a gesture from the user. The image is analyzed to identify its elements, which are compared to known images, before either soliciting an input from the user or displaying a menu to the user. Comparing the image and/or graphical image elements may be effectuated using a trained artificial intelligence engine or, in some embodiments, a structured data source, said data source including predetermined images and menu options. If the input from the user is known, a predetermined menu is presented. If the image is not known, an image or other menu options are presented, and the desired options are solicited from the user. Once the user selects an option, the resulting selection may be used to further train the AI system or added to the structured data source for future reference.
    Type: Grant
    Filed: July 26, 2022
    Date of Patent: April 23, 2024
    Inventor: Richard Terrell
  • Patent number: 11966047
    Abstract: The present disclosure generally relates to the field of eye tracking systems. An eye tracking system is provided. The eye tracking system comprises an illuminator arrangement, including at least one light source, configured to illuminate an eye of a user. The eye tracking system is configured to enable a reduction of reflections from an optic arrangement (e.g., a pair of glasses) that is located in a light beam path between the illuminator arrangement and the eye when the eye tracking system is in use. The illuminator arrangement is configured to emit p-polarized light to be incident on a surface of the optic arrangement at an angle corresponding to, or substantially corresponding to, Brewster's angle.
    Type: Grant
    Filed: September 28, 2020
    Date of Patent: April 23, 2024
    Assignee: Tobii AB
    Inventor: Magnus Arvidsson
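Brewster's angle, referenced in this abstract, is the incidence angle at which p-polarized reflection from a dielectric surface vanishes: θ_B = arctan(n₂/n₁). A small sketch (the function name is ours; the physics is standard optics, not specific to the patent):

```python
import math

def brewster_angle_deg(n1, n2):
    """Brewster's angle in degrees for light passing from a medium of
    refractive index n1 (e.g., air) into one of index n2 (e.g., glass).
    At this incidence angle, p-polarized reflection vanishes."""
    return math.degrees(math.atan2(n2, n1))
```

For air (n₁ ≈ 1.0) onto typical eyeglass material (n₂ ≈ 1.5), this gives roughly 56.3°, which is presumably the regime the illuminator arrangement targets.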
  • Patent number: 11967104
    Abstract: A method of determining the location of an object in a camera field of view is disclosed. The method may utilize a camera system that includes a digital video camera and a computer that runs image analysis software for analyzing images received from the camera. The image analysis software is configured to isolate pixel groups in each image. The method includes the steps of locating an object in the camera's field of view in which the object is represented by a set of pixels; identifying the centroid of the set of pixels representing the object; and calculating the location of the object relative to the camera based upon known parameters including the focal plane and camera lens location.
    Type: Grant
    Filed: November 29, 2021
    Date of Patent: April 23, 2024
    Assignee: United States of America as represented by the Secretary of the Air Force
    Inventors: Anthony Ligouri, Hayk Azatyan, William Erwin, Travis Rennich, David Shald, Adam Warren
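The centroid-then-locate steps in this abstract can be sketched with a pinhole back-projection. The helper names and the specific pinhole parameterization (principal point, per-axis focal lengths in pixels) are assumptions for illustration:

```python
def centroid(pixels):
    """Centroid of the (x, y) pixel coordinates representing the object."""
    xs, ys = zip(*pixels)
    return (sum(xs) / len(xs), sum(ys) / len(ys))

def bearing_from_camera(cx, cy, fx, fy, px, py):
    """Back-project a centroid (cx, cy) through a pinhole model with
    principal point (px, py) and focal lengths (fx, fy) in pixels,
    giving normalized ray direction components (tan of view angles)."""
    return ((cx - px) / fx, (cy - py) / fy)
```

A centroid at the principal point back-projects to the optical axis, i.e., a (0, 0) bearing.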
  • Patent number: 11967086
    Abstract: A method for trajectory generation based on player tracking is described herein. The method includes determining a temporal association for a first player in a captured field of view and determining a spatial association for the first player. The method also includes deriving a global player identification based on the temporal association and the spatial association and generating a trajectory based on the global player identification.
    Type: Grant
    Filed: July 31, 2019
    Date of Patent: April 23, 2024
    Assignee: INTEL CORPORATION
    Inventors: Yikai Fang, Qiang Li, Wenlong Li, Chenning Liu, Chen Ling, Hongzhi Tao, Yumeng Wang, Hang Zheng
  • Patent number: 11966838
    Abstract: In various examples, a machine learning model—such as a deep neural network (DNN)—may be trained to use image data and/or other sensor data as inputs to generate two-dimensional or three-dimensional trajectory points in world space, a vehicle orientation, and/or a vehicle state. For example, sensor data that represents orientation, steering information, and/or speed of a vehicle may be collected and used to automatically generate a trajectory for use as ground truth data for training the DNN. Once deployed, the trajectory points, the vehicle orientation, and/or the vehicle state may be used by a control component (e.g., a vehicle controller) for controlling the vehicle through a physical environment. For example, the control component may use these outputs of the DNN to determine a control profile (e.g., steering, decelerating, and/or accelerating) specific to the vehicle for controlling the vehicle through the physical environment.
    Type: Grant
    Filed: May 10, 2019
    Date of Patent: April 23, 2024
    Assignee: NVIDIA Corporation
    Inventors: Urs Muller, Mariusz Bojarski, Chenyi Chen, Bernhard Firner
  • Patent number: 11967122
    Abstract: Systems, methods, and computer-readable media are disclosed for context aware verification for sensor pipelines. Autonomous vehicles (AVs) may include an extensive number of sensors to provide sufficient situational awareness to perception and control systems of the AV. For those systems to operate reliably, the data coming from the different sensors should be checked for integrity. To this end, the systems and methods described herein may use contextual clues to ensure that the data coming from the different sensors is reliable.
    Type: Grant
    Filed: April 19, 2021
    Date of Patent: April 23, 2024
    Assignee: ARGO AI, LLC
    Inventors: Michel H. J. Laverne, Dane P. Bennington
  • Patent number: 11967105
    Abstract: A system and method are provided. The method comprises obtaining a camera live stream from a camera in a user device, the camera live stream including image data of a particular product; determining one or more image features common to images of one or more products based at least on image analysis of image data of the images of the one or more products; comparing the one or more image features to one or more image features of the image data of the particular product to generate one or more potential adjustments to the one or more image features of the image data of the particular product; and providing, for presentation together with the camera live stream on the user device, at least one indication based on the one or more potential adjustments to the one or more image features of the image data of the particular product.
    Type: Grant
    Filed: February 6, 2023
    Date of Patent: April 23, 2024
    Assignee: Shopify Inc.
    Inventors: Benjamin Lui, Guduru Sai Nihas, Salim Batlouni
  • Patent number: 11967092
    Abstract: A system and method for detection-guided tracking of human-dynamics is provided. The system receives an input human-dynamics sequence including geometry information and an RGB video of a human object. The system inputs the RGB video to the neural network and estimates a pose of the human object in each frame of the RGB video based on output of the neural network for the input. The system selects, from the input human-dynamics sequence, a key-frame for which the estimated pose is closest to a reference human pose. From the selected key-frame and up to a number of frames of the input human-dynamics sequence, the system generates a tracking sequence for a 3D human mesh of the human object. The generated tracking sequence includes final values of parameters of articulate motion and non-rigid motion of the 3D human mesh. Based on the generated tracking sequence, the system generates a free-viewpoint video.
    Type: Grant
    Filed: November 26, 2019
    Date of Patent: April 23, 2024
    Assignee: SONY GROUP CORPORATION
    Inventor: Qing Zhang
  • Patent number: 11964653
    Abstract: A driving assistance system includes a processor and a memory that stores surroundings information indicating the surroundings of a vehicle detected by sensors mounted on the vehicle. The processor is configured to acquire the position of a target in front of the vehicle and the position of the boundary of a roadway area in front of the vehicle based on the surroundings information. The processor is configured to determine whether the target is in the roadway area based on the position of the target and the position of the boundary. The processor is configured to calculate the distance between the target and the boundary when the target is in the roadway area. The processor is configured to determine whether the target is crossing the roadway area based on the relationship between the distance and a time.
    Type: Grant
    Filed: September 24, 2021
    Date of Patent: April 23, 2024
    Assignee: TOYOTA JIDOSHA KABUSHIKI KAISHA
    Inventor: Wataru Sasagawa
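The "relationship between the distance and a time" in this abstract suggests a rate test: the target is deemed to be crossing when its distance from the roadway boundary changes faster than some lateral-speed threshold. This sketch and its threshold are assumptions for illustration, not Toyota's criterion:

```python
def is_crossing(distances, times, speed_threshold=0.5):
    """Given target-to-boundary distances (m) sampled at the given
    times (s), flag crossing when the magnitude of the average rate
    of change exceeds speed_threshold (m/s, illustrative)."""
    d0, d1 = distances[0], distances[-1]
    t0, t1 = times[0], times[-1]
    return abs((d1 - d0) / (t1 - t0)) > speed_threshold
```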
  • Patent number: 11967268
    Abstract: Imaging systems and techniques are described. An imaging system causes a display to display light according to a predefined pattern. The imaging system receives image data of a scene from an image sensor. The image data is captured using the image sensor while the display is configured to display the light according to the predefined pattern. The imaging system processes the image data to generate at least one image frame of the scene based on detection of the predefined pattern in the image data. The imaging system outputs the at least one image frame of the scene.
    Type: Grant
    Filed: November 7, 2022
    Date of Patent: April 23, 2024
    Assignee: QUALCOMM Incorporated
    Inventors: Nikhil Verma, Prakasha Nayak, Vishnu Vardhan Kasilya Sudarsan, Avinash Shrivastava, Balamukund Sripada
  • Patent number: 11967089
    Abstract: Embodiments of this application provide an object tracking method performed by a computer device. The method includes, when a target object is lost in a second image frame among first subsequent image frames, determining, according to a first local feature and in second subsequent image frames starting with the second image frame, a third image frame in which the target object reappears after the target object is lost during the tracking; determining a location of a target object region in the third image frame including the target object; and continuing to track the target object in image frames according to the location of the target object region in the third image frame. Through the object tracking method, a lost object can be detected and repositioned by using an extracted first local feature of the target object, thereby effectively resolving the object-loss problem in existing technical solutions.
    Type: Grant
    Filed: June 1, 2021
    Date of Patent: April 23, 2024
    Assignee: TENCENT TECHNOLOGY (SHENZHEN) COMPANY LIMITED
    Inventors: Yitong Wang, Jun Huang, Xing Ji
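The reacquisition step this abstract describes (scan subsequent frames for a region whose feature matches the stored local feature of the lost target) can be sketched as a nearest-feature search. The L2 metric and the distance threshold are assumptions for illustration:

```python
def find_reappearance(template, frames, dist_threshold=0.2):
    """Scan per-frame candidate feature vectors for one within
    dist_threshold (L2) of the lost target's stored local feature.
    Returns the index of the first matching frame, or None."""
    def l2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    for i, candidate_feats in enumerate(frames):
        for feat in candidate_feats:
            if l2(template, feat) < dist_threshold:
                return i
    return None
```

Once a frame is found, tracking would resume from the matched region's location, as the abstract describes.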
  • Patent number: 11967080
    Abstract: A system is provided for object localization in image data. The system includes an object localization framework comprising a plurality of object localization processes. The system is configured to receive an image comprising unannotated image data having at least one object in the image, access a first object localization process of the plurality of object localization processes, determine first bounding box information for the image using the first object localization process, wherein the first bounding box information comprises at least one first bounding box annotating at least a first portion of the at least one object in the image, and receive first feedback regarding the first bounding box information determined by the first object localization process. The system is further configured to persist the image with the first bounding box information or access a second object localization process based on the first feedback.
    Type: Grant
    Filed: May 10, 2021
    Date of Patent: April 23, 2024
    Assignee: Salesforce, Inc.
    Inventors: Joy Mustafi, Lakshya Kumar, Rajdeep Singh Dua
  • Patent number: 11967139
    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for false detection removal using adversarial masks. The method includes performing object detection on a first image that includes a first region using a detection model; determining that the detection model incorrectly classified the first region of the first image; generating an adversarial mask based on the first region of the first image and the detection model; obtaining a second image that includes the first region; generating a masked image based on the second image and the adversarial mask; and performing object detection on the masked image including the first region using the detection model.
    Type: Grant
    Filed: May 13, 2021
    Date of Patent: April 23, 2024
    Assignee: ObjectVideo Labs, LLC
    Inventors: Eduardo Romera Carmena, Gang Qian, Allison Beach
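The masking idea above can be illustrated with a toy example: once a region is known to trigger a false positive, build a per-pixel mask for that region and apply it to later frames before re-running detection. Everything below is invented for illustration; the real patent presumably derives the mask adversarially from the model's gradients rather than by cancelling pixels.

```python
# Toy false-positive suppression: a threshold "detector" and a mask that
# cancels the offending region in subsequent frames.

THRESH = 0.5

def detect(image, region):
    """Toy detector: fires when the region's mean intensity exceeds THRESH."""
    r0, c0, r1, c1 = region
    vals = [image[r][c] for r in range(r0, r1) for c in range(c0, c1)]
    return sum(vals) / len(vals) > THRESH

def make_mask(image, region):
    """Per-pixel offsets that cancel the false-positive region (toy rule)."""
    r0, c0, r1, c1 = region
    mask = [[0.0] * len(image[0]) for _ in image]
    for r in range(r0, r1):
        for c in range(c0, c1):
            mask[r][c] = -image[r][c]
    return mask

def apply_mask(image, mask):
    return [[max(0.0, v + m) for v, m in zip(row, mrow)]
            for row, mrow in zip(image, mask)]

frame1 = [[0.9, 0.9], [0.9, 0.9]]
region = (0, 0, 2, 2)
assert detect(frame1, region)          # the false positive fires on frame 1
mask = make_mask(frame1, region)
frame2 = [[0.8, 0.9], [0.9, 0.8]]      # a later frame with the same region
suppressed = detect(apply_mask(frame2, mask), region)
```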
  • Patent number: 11957412
    Abstract: An imaging system for an ophthalmic laser system includes a prism cone made of a transparent optical material and disposed downstream of the focusing objective lens of the ophthalmic laser system, the prism cone having an upper surface, a lower surface parallel to the upper surface, a tapered side surface between the upper and lower surfaces, and a beveled surface formed at an upper edge of the prism cone and intersecting the upper surface and the side surface, and a camera disposed adjacent to the prism cone and facing the beveled surface. The camera is disposed to directly receive light that enters the lower surface of the prism cone and exits the beveled surface without having been reflected by any surface.
    Type: Grant
    Filed: November 11, 2019
    Date of Patent: April 16, 2024
    Assignee: AMO Development, LLC
    Inventors: Zenon Witowski, Mohammad Saidur Rahaman, Daryl Wong
  • Patent number: 11960914
    Abstract: Provided are methods and systems for suggesting an enhanced multimodal interaction. The method for suggesting at least one modality of interaction, includes: identifying, by an electronic device, initiation of an interaction by a user with a first device using a first modality; detecting, by the electronic device, an intent of the user and a state of the user based on the identified initiated interaction; determining, by the electronic device, at least one of a second modality and at least one second device, to continue the initiated interaction, based on the detected intent of the user and the detected state of the user; and providing, by the electronic device, a suggestion to the user to continue the interaction with the first device using the determined second modality, by indicating the second modality on the first device or the at least one second device.
    Type: Grant
    Filed: March 28, 2023
    Date of Patent: April 16, 2024
    Assignee: SAMSUNG ELECTRONICS CO., LTD.
    Inventors: Praveen Kumar Guvvakallu Sivamoorthy, Mayank Kumar Tyagi, Navin N, Aravindh N, Sanofer H, Sudip Roy, Arjun Janardhanan Kappatan, Lalith Satya Vara Prasad Medeti, Vrajesh Navinchandra Sejpal, Saumitri Choudhury
  • Patent number: 11960998
    Abstract: Various embodiments herein each include at least one of systems, methods, software, and data structures for context-aided machine vision. For example, one method embodiment includes identifying a customer in a shopping area and maintaining an item bin in a computing system of data identifying items the customer has picked up for purchase. This method further includes receiving an image of the customer holding an item and performing item identification processing on the image to identify the item the customer is holding. The item identification processing may be performed based in part on a stored shopping history of the customer indicating items the customer is more likely to purchase. The identified item is then added to the item bin of the customer.
    Type: Grant
    Filed: March 15, 2023
    Date of Patent: April 16, 2024
    Assignee: NCR Voyix Corporation
    Inventors: Brent Vance Zucker, Adam Justin Lieberman
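The context-aided step described above, in which a stored shopping history biases the item identification, reduces to reweighting classifier scores with a per-item prior. The sketch below is a minimal illustration of that idea; the scores, prior values, and item names are all invented.

```python
# Bias ambiguous item-recognition scores with a history-derived prior,
# then add the winning item to the customer's item bin.

def identify_item(scores, history_prior):
    """Blend raw model scores with history priors; return the best item."""
    blended = {item: scores[item] * history_prior.get(item, 1.0)
               for item in scores}
    return max(blended, key=blended.get)

def add_to_bin(item_bin, item):
    item_bin.append(item)
    return item_bin

# Raw scores are ambiguous between two visually similar products.
scores = {"cola": 0.40, "diet_cola": 0.38, "seltzer": 0.22}
history_prior = {"diet_cola": 1.5}   # this customer usually buys diet cola
item_bin = []
picked = identify_item(scores, history_prior)
add_to_bin(item_bin, picked)
```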
  • Patent number: 11961308
    Abstract: Systems and methods for detecting blockages in images are described. An example method may include receiving a plurality of images captured by a camera installed on an apparatus. The method may include identifying one or more candidate blocked regions in the plurality of images. Each of the candidate blocked regions may contain image data caused by blockages in the camera's field-of-view. The method may further include assigning scores to the one or more candidate blocked regions based on relationships among the one or more candidate blocked regions in the plurality of images. In response to a determination that one of the scores is above a predetermined blockage threshold, the method may include generating an alarm signal for the apparatus.
    Type: Grant
    Filed: February 2, 2023
    Date of Patent: April 16, 2024
    Assignee: NVIDIA CORPORATION
    Inventors: Xiaoyan Mu, Xiaohan Hu
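The scoring scheme in this abstract, where relationships among candidate regions across frames drive the score, can be sketched with the simplest possible relationship: a candidate that recurs frame after frame accumulates a score, and an alarm fires once the score crosses a threshold. The threshold and the recurrence-counting rule are assumptions for illustration.

```python
# Score candidate blocked regions by persistence across frames; raise an
# alarm when any score exceeds a threshold.

BLOCKAGE_THRESHOLD = 3

def score_candidates(frames_of_candidates):
    """Count how often each candidate region recurs across frames."""
    scores = {}
    for candidates in frames_of_candidates:
        for region in candidates:
            scores[region] = scores.get(region, 0) + 1
    return scores

def check_blockage(scores):
    return any(s > BLOCKAGE_THRESHOLD for s in scores.values())

# Region (0, 0, 8, 8) appears in every frame (a persistent smudge);
# (5, 5, 6, 6) is transient and should not trigger the alarm alone.
frames = [
    [(0, 0, 8, 8), (5, 5, 6, 6)],
    [(0, 0, 8, 8)],
    [(0, 0, 8, 8)],
    [(0, 0, 8, 8)],
]
alarm = check_blockage(score_candidates(frames))
```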
  • Patent number: 11959216
    Abstract: Disclosed herein is a washing machine capable of identifying whether laundry of an inner tub includes waterproof clothing. The washing machine includes a cabinet provided with an opening at an upper portion thereof, an outer tub provided in the cabinet, an inner tub provided in the outer tub, a motor configured to rotate the inner tub, a camera configured to capture an image of an inside of the inner tub, and a controller configured to control the motor to increase a rotational speed of the inner tub to a first rotational speed during spinning. The controller is configured to control the motor to set the rotational speed of the inner tub to a second rotational speed, which is less than the first rotational speed, based on the image of the inside of the inner tub captured by the camera during the spinning.
    Type: Grant
    Filed: July 27, 2020
    Date of Patent: April 16, 2024
    Assignee: SAMSUNG ELECTRONICS CO., LTD.
    Inventors: Sungmo Lee, Junhyun Park, Seokmo Chang, Jeonghoon Kang
  • Patent number: 11960645
    Abstract: A method for disengaging a surgical instrument of a surgical robotic system comprising receiving a gaze input from an eye tracker; determining, by one or more processors, whether the gaze input indicates the gaze of the user is outside or inside of the display; in response to determining the gaze input indicates the gaze of the user is outside of the display, determining an amount of time the gaze of the user is outside of the display; in response to determining the gaze of the user is outside of the display for less than a maximum amount of time, pausing the surgical robotic system from a teleoperation mode; and in response to determining the gaze of the user is outside of the display for more than the maximum amount of time, disengaging the surgical robotic system from the teleoperation mode.
    Type: Grant
    Filed: November 16, 2021
    Date of Patent: April 16, 2024
    Assignee: Verb Surgical Inc.
    Inventors: Anette Lia Freiin von Kapri, Denise Ann Miller, Paolo Invernizzi, Joan Savall, John Magnasco
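The gaze rule in this abstract is a small piece of state logic: off-screen gaze for less than a maximum time pauses teleoperation, and longer than that disengages it. A minimal sketch, with the time limit and mode names assumed for illustration:

```python
# Map gaze state and off-screen duration to a teleoperation mode.

MAX_OFFSCREEN_S = 2.0  # illustrative threshold, not from the patent

def teleop_mode(gaze_inside, offscreen_seconds):
    if gaze_inside:
        return "teleoperating"
    if offscreen_seconds < MAX_OFFSCREEN_S:
        return "paused"       # brief glance away: pause, keep session
    return "disengaged"       # sustained inattention: disengage entirely

mode_brief = teleop_mode(gaze_inside=False, offscreen_seconds=0.5)
mode_long = teleop_mode(gaze_inside=False, offscreen_seconds=5.0)
```

The pause/disengage split matters because a paused session can presumably resume immediately when gaze returns, while a disengaged one requires a deliberate re-engagement step.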
  • Patent number: 11960991
    Abstract: A computer-implemented method for training a classifier, particularly a binary classifier, for classifying input signals to optimize performance according to a non-decomposable metric that measures an alignment between classifications corresponding to input signals of a set of training data and corresponding predicted classifications of the input signals obtained from the classifier. The method includes providing weighting factors that characterize how the non-decomposable metric depends on a plurality of terms from a confusion matrix of the classifications and the predicted classifications, and training the classifier depending on the provided weighting factors.
    Type: Grant
    Filed: November 17, 2020
    Date of Patent: April 16, 2024
    Assignees: ROBERT BOSCH GMBH, CARNEGIE MELLON UNIVERSITY
    Inventors: Rizal Fathony, Frank Schmidt, Jeremy Zieg Kolter
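To make the abstract's terminology concrete: a non-decomposable metric such as F1 cannot be written as a sum of per-example losses, but it can be written in terms of confusion-matrix counts, and per-term weights over those counts can be turned into per-example training weights. The sketch below shows only that bookkeeping, with invented weight values; it is not the patented training procedure.

```python
# Confusion-matrix counts, a non-decomposable metric (F1) over them, and
# per-example weights reflecting how the metric weights the positive class.

def confusion(labels, preds):
    tp = sum(1 for y, p in zip(labels, preds) if y == 1 and p == 1)
    fp = sum(1 for y, p in zip(labels, preds) if y == 0 and p == 1)
    fn = sum(1 for y, p in zip(labels, preds) if y == 1 and p == 0)
    tn = sum(1 for y, p in zip(labels, preds) if y == 0 and p == 0)
    return tp, fp, fn, tn

def f1(tp, fp, fn, tn):
    """F1 depends jointly on tp, fp, fn: it does not decompose per example."""
    return 2 * tp / (2 * tp + fp + fn) if (2 * tp + fp + fn) else 0.0

def example_weights(labels, w_pos, w_neg):
    """Per-example weights derived from how the metric penalizes FN vs FP."""
    return [w_pos if y == 1 else w_neg for y in labels]

labels = [1, 1, 0, 0, 0]
preds  = [1, 0, 1, 0, 0]
tp, fp, fn, tn = confusion(labels, preds)
score = f1(tp, fp, fn, tn)
weights = example_weights(labels, w_pos=2.0, w_neg=1.0)
```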
  • Patent number: 11954916
    Abstract: An automated driving system includes an object detection system. A neural network image encoder generates image embeddings associated with an image including an object. A neural network text encoder generates concept embeddings associated with each of a plurality of concepts. Each of the plurality of concepts is associated with one of at least two object classes. A confidence score module generates a confidence score for each of the plurality of concepts based on the image embeddings and the concept embeddings associated with the concept. An object class prediction module generates a predicted object class of the object based on an association between a set of concepts of the plurality of concepts having at least two of the highest values of the generated confidence scores and the one of the at least two object classes associated with a majority of the set of concepts.
    Type: Grant
    Filed: February 7, 2022
    Date of Patent: April 9, 2024
    Assignee: GM GLOBAL TECHNOLOGY OPERATIONS LLC
    Inventors: Wei Tong, Siddhartha Gupta, Jacob Alan Bond, Zhuoning Yuan
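The prediction step this abstract describes (score every concept against the image embedding, take the top-scoring concepts, and predict the class held by the majority of them) can be sketched with cosine similarity. The embeddings, concept strings, and class names below are invented; a real system would obtain them from the neural image and text encoders.

```python
# Concept-voting classification: cosine-score concepts against an image
# embedding, take the top-k, and return the majority class among them.

import math
from collections import Counter

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def predict_class(image_emb, concepts, top_k=3):
    """concepts: list of (concept_embedding, object_class) pairs."""
    ranked = sorted(concepts, key=lambda c: cosine(image_emb, c[0]),
                    reverse=True)
    votes = Counter(cls for _, cls in ranked[:top_k])
    return votes.most_common(1)[0][0]

image_emb = (1.0, 0.1, 0.0)
concepts = [
    ((1.0, 0.0, 0.0), "vehicle"),     # e.g. "has wheels"
    ((0.9, 0.2, 0.0), "vehicle"),     # e.g. "has headlights"
    ((0.0, 1.0, 0.0), "pedestrian"),  # e.g. "is walking"
    ((0.0, 0.9, 0.3), "pedestrian"),  # e.g. "wears clothing"
]
predicted = predict_class(image_emb, concepts)
```

Voting over several concepts, rather than trusting the single best-matching one, is what lets the majority class override one spuriously high concept score.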
  • Patent number: 11954898
    Abstract: There is provided a learning method and a learning device for performing transfer learning on an object detector that has been trained to detect first object classes such that the object detector is able to detect second object classes. Further, a testing method and a testing device are provided to allow at least part of the first object classes and the second object classes to be detected by using the object detector having been trained through the transfer learning. Accordingly, a detection performance can be improved for the second object classes that cannot be detected through training data set corresponding to the first object classes.
    Type: Grant
    Filed: October 27, 2023
    Date of Patent: April 9, 2024
    Assignee: SUPERB AI CO., LTD.
    Inventor: Kye Hyeon Kim
  • Patent number: 11954804
    Abstract: Provided is an information processing device that enables a virtual object to be displayed at an appropriate position in the real world. The information processing device includes a position estimation unit that estimates a current position in a first coordinate system, a display position setting unit that sets a display position of a virtual object in a third coordinate system on the basis of an environment database, virtual object information including the display position of the virtual object in a second coordinate system, and an observed image captured near the current position, a meta information generation unit that generates observation meta information, and an environment database management unit that compares observation data with environment data of the environment database to determine whether to add the observation data to the environment database.
    Type: Grant
    Filed: August 14, 2019
    Date of Patent: April 9, 2024
    Assignee: SONY CORPORATION
    Inventors: Hajime Wakabayashi, Kuniaki Torii, Ryo Watanabe
  • Patent number: 11954805
    Abstract: In one embodiment, a method includes by one or more computing devices, accessing an image including a hand of a user of a head-mounted display at a first time. The method includes generating, from at least the image, a virtual object representation of the hand, defined in a virtual environment that includes at least one other virtual object. The method includes rendering a first image of the virtual environment comprising a first portion of the hand of the user at a first frame rate, and determining a second viewpoint of the user at a second time. The method includes rendering a second image of the virtual environment comprising a second portion of the hand of the user at a second frame rate. The method includes providing, to a set of light emitters of the head-mounted display, instructions to display the second image.
    Type: Grant
    Filed: December 29, 2022
    Date of Patent: April 9, 2024
    Assignee: META PLATFORMS TECHNOLOGIES, LLC
    Inventors: Steve John Clohset, Warren Andrew Hunt
  • Patent number: 11954537
    Abstract: Scaling an ordered event stream (OES) based on an information-unit (IU) metric is disclosed. The IU metric can correspond to an amount of computing resources that can be consumed to access information embodied in event data of an event of the OES. In this regard, the amount of computing resources to access the data of the stream event itself can be distinct from an amount of computing resources employed to access information embodied in the data. As such, where an external application, e.g., a reader, a writer, etc., can connect to an OES data storage system, enabling the OES to be scaled in response to burdening of computing resources accessing event information, rather than merely event data, can aid in preservation of an ordering of events accessed from the OES.
    Type: Grant
    Filed: April 22, 2021
    Date of Patent: April 9, 2024
    Assignee: EMC IP HOLDING COMPANY LLC
    Inventors: Mikhail Danilov, Yohannes Altaye
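The distinction the abstract draws, between the cost of reading an event's bytes and the cost of interpreting the information inside them, suggests a scaling trigger driven by a processing-cost estimate rather than raw throughput. The sketch below illustrates that idea only; the cost model, capacity constant, and scale-up-only policy are assumptions, not the patented mechanism.

```python
# Scale an ordered event stream's segment count on an information-unit (IU)
# load estimate: event size weighted by a per-event decode cost factor.

import math

SEGMENT_CAPACITY_IU = 100.0  # illustrative capacity per segment

def iu_load(events):
    """events: (size_bytes, decode_factor) pairs; returns total IU cost."""
    return sum(size * decode_factor for size, decode_factor in events)

def target_segments(events, current_segments):
    load = iu_load(events)
    needed = max(1, math.ceil(load / SEGMENT_CAPACITY_IU))
    return max(current_segments, needed)  # this sketch only scales up

# Small events that are expensive to interpret: 60 bytes of event data,
# but 300 IU of processing cost once decode cost is accounted for.
events = [(10, 5.0)] * 6
segments = target_segments(events, current_segments=1)
```

A byte-count trigger would see only 60 bytes here and never scale; the IU trigger scales to three segments because interpreting the events is what burdens the readers.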
  • Patent number: 11948312
    Abstract: To minimize the impact of any delay that occurs when the process of detecting an object in a video takes time, and thereby achieve accurate tracking, the object detection/tracking device according to the present invention is provided with: an acquisition unit which acquires a video; a tracking unit which tracks an object in the video; a detection unit which detects an object in the video; an association unit which associates the same objects that have been detected and tracked in the same image of the video; and a correction unit which corrects the position of the tracked object using the position of the detected object, from among the associated objects.
    Type: Grant
    Filed: April 17, 2019
    Date of Patent: April 2, 2024
    Assignee: NEC CORPORATION
    Inventor: Ryoma Oami
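The correction step above, in which the fast-but-drifting tracker position is corrected by the slow-but-accurate detector position after the two are associated, can be sketched as an IoU-gated blend. The association rule, blend weight, and IoU gate below are illustrative choices, not the patented method.

```python
# Correct a tracked bounding box toward an associated detected box.
# Boxes are (x0, y0, x1, y1).

def iou(a, b):
    ax0, ay0, ax1, ay1 = a
    bx0, by0, bx1, by1 = b
    iw = max(0.0, min(ax1, bx1) - max(ax0, bx0))
    ih = max(0.0, min(ay1, by1) - max(ay0, by0))
    inter = iw * ih
    union = ((ax1 - ax0) * (ay1 - ay0)
             + (bx1 - bx0) * (by1 - by0) - inter)
    return inter / union if union else 0.0

def correct(tracked, detected, alpha=0.5, min_iou=0.3):
    """Blend the tracked box toward the detection when they overlap enough."""
    if iou(tracked, detected) < min_iou:
        return tracked  # not the same object; leave the track untouched
    return tuple((1 - alpha) * t + alpha * d
                 for t, d in zip(tracked, detected))

tracked = (10.0, 10.0, 20.0, 20.0)   # where the tracker thinks the object is
detected = (12.0, 12.0, 22.0, 22.0)  # delayed but accurate detection
corrected = correct(tracked, detected)
```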
  • Patent number: 11945276
    Abstract: The present disclosure discloses a preview vehicle height control system and a method of controlling the same. The system includes a monitoring device configured to detect the road surface condition of a driving path of a vehicle, an active suspension configured to adjust a vehicle height, a memory configured to store a plurality of data maps distinguished based on a type of bump, each data map having a vehicle dynamic characteristic as an input and a tuning factor as an output, and a controller configured to derive the tuning factor based on a data map, among the plurality of data maps of the memory, corresponding to the bump detected by the monitoring device, derive a target vehicle height in a form of a Gaussian distribution by substituting the tuning factor, and control the active suspension to follow the derived target vehicle height.
    Type: Grant
    Filed: August 19, 2022
    Date of Patent: April 2, 2024
    Assignees: HYUNDAI MOTOR COMPANY, KIA CORPORATION, FOUNDATION FOR RESEARCH AND BUSINESS SEOUL NATIONAL UNIVERSITY OF SCIENCE AND TECHNOLOGY
    Inventors: Youngil Sohn, Min Jun Kim, Sang Woo Hwang, Sehyun Chang, Jun Ho Seong, Yong Hwan Jeong, Seong Jin Yim
  • Patent number: 11948303
    Abstract: A method and apparatus of objective assessment of images captured from a human gastrointestinal (GI) tract are disclosed. According to this method, one or more images captured using an endoscope while the endoscope is inside the human GI tract are received. Whether any specific target object is present is checked. When one or more specific target objects are detected in the images: areas of the specific target objects in the images are determined, and an objective assessment score is derived based on the areas of the specific target objects across a substantial number of the images, where the step of detecting the specific target objects is performed using an artificial intelligence process.
    Type: Grant
    Filed: June 19, 2021
    Date of Patent: April 2, 2024
    Assignee: CapsoVision Inc.
    Inventors: Kang-Huai Wang, Chenyu Wu, Gordon C. Wilson
  • Patent number: 11948400
    Abstract: An action detection method based on a human skeleton feature and a storage medium belong to the field of computer vision. The method includes: for each person, extracting a series of body keypoints in every frame of the video as the human skeleton feature; calculating a body structure center point and an approximate rigid-motion area from the human skeleton feature as the calculated value of the skeleton state, and predicting an estimated value for the next frame; performing target matching between the estimated and calculated values, correlating the human skeleton features belonging to the same target to obtain a skeleton feature sequence, and then correlating the features of each keypoint in the temporal domain to obtain a spatial-temporal skeleton feature; and inputting the skeleton feature into an action detection model to obtain an action category. The disclosed method improves the accuracy of action detection.
    Type: Grant
    Filed: June 30, 2023
    Date of Patent: April 2, 2024
    Assignee: HUAZHONG UNIVERSITY OF SCIENCE AND TECHNOLOGY
    Inventors: Li Yu, Han Yu
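The matching step this abstract relies on (predict each person's skeleton center in the next frame, then match new-frame skeletons to existing tracks by proximity to the predicted centers) can be sketched with a constant-velocity prediction and greedy nearest-center assignment. The keypoints, velocities, and greedy policy below are invented for illustration.

```python
# Predict per-person skeleton centers and greedily match next-frame
# skeleton detections to tracks by nearest predicted center.

def center(keypoints):
    """Mean of (x, y) keypoints: a toy body-structure center point."""
    xs = [x for x, _ in keypoints]
    ys = [y for _, y in keypoints]
    return (sum(xs) / len(xs), sum(ys) / len(ys))

def predict(prev_center, velocity):
    """Constant-velocity estimate of the center in the next frame."""
    return (prev_center[0] + velocity[0], prev_center[1] + velocity[1])

def match(predicted_centers, detections):
    """Greedy nearest-center matching: track index -> detection index."""
    assignment, taken = {}, set()
    for ti, pc in enumerate(predicted_centers):
        best = min((di for di in range(len(detections)) if di not in taken),
                   key=lambda di: (center(detections[di])[0] - pc[0]) ** 2
                                + (center(detections[di])[1] - pc[1]) ** 2,
                   default=None)
        if best is not None:
            assignment[ti] = best
            taken.add(best)
    return assignment

# Two tracked people moving right; the new frame lists them in swapped order.
pred = [predict((0.0, 0.0), (1.0, 0.0)), predict((10.0, 0.0), (1.0, 0.0))]
detections = [[(11.0, -1.0), (11.0, 1.0)],   # skeleton centered near x=11
              [(1.0, -1.0), (1.0, 1.0)]]     # skeleton centered near x=1
assignment = match(pred, detections)
```

Once skeletons are correlated to the right targets this way, each person's keypoints can be stacked across frames into the spatial-temporal feature the action model consumes.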