Target Tracking Or Detecting Patents (Class 382/103)
  • Patent number: 11967122
    Abstract: Systems, methods, and computer-readable media are disclosed for context aware verification for sensor pipelines. Autonomous vehicles (AVs) may include an extensive number of sensors to provide sufficient situational awareness to perception and control systems of the AV. For those systems to operate reliably, the data coming from the different sensors should be checked for integrity. To this end, the systems and methods described herein may use contextual clues to ensure that the data coming from the different sensors is reliable.
    Type: Grant
    Filed: April 19, 2021
    Date of Patent: April 23, 2024
    Assignee: ARGO AI, LLC
    Inventors: Michel H. J. Laverne, Dane P. Bennington
  • Patent number: 11967268
    Abstract: Imaging systems and techniques are described. An imaging system causes a display to display light according to a predefined pattern. The imaging system receives image data of a scene from an image sensor. The image data is captured using the image sensor while the display is configured to display the light according to the predefined pattern. The imaging system processes the image data to generate at least one image frame of the scene based on detection of the predefined pattern in the image data. The imaging system outputs the at least one image frame of the scene.
    Type: Grant
    Filed: November 7, 2022
    Date of Patent: April 23, 2024
    Assignee: QUALCOMM Incorporated
    Inventors: Nikhil Verma, Prakasha Nayak, Vishnu Vardhan Kasilya Sudarsan, Avinash Shrivastava, Balamukund Sripada
  • Patent number: 11967092
    Abstract: A system and method for detection-guided tracking of human-dynamics is provided. The system receives an input human-dynamics sequence including geometry information and an RGB video of a human object. The system inputs the RGB video to the neural network and estimates a pose of the human object in each frame of the RGB video based on output of the neural network for the input. The system selects, from the input human-dynamics sequence, a key-frame for which the estimated pose is closest to a reference human pose. From the selected key-frame and up to a number of frames of the input human-dynamics sequence, the system generates a tracking sequence for a 3D human mesh of the human object. The generated tracking sequence includes final values of parameters of articulate motion and non-rigid motion of the 3D human mesh. Based on the generated tracking sequence, the system generates a free-viewpoint video.
    Type: Grant
    Filed: November 26, 2019
    Date of Patent: April 23, 2024
    Assignee: SONY GROUP CORPORATION
    Inventor: Qing Zhang
  • Patent number: 11967080
    Abstract: A system is provided for object localization in image data. The system includes an object localization framework comprising a plurality of object localization processes. The system is configured to receive an image comprising unannotated image data having at least one object in the image, access a first object localization process of the plurality of object localization processes, determine first bounding box information for the image using the first object localization process, wherein the first bounding box information comprises at least one first bounding box annotating at least a first portion of the at least one object in the image, and receive first feedback regarding the first bounding box information determined by the first object localization process. The system is further configured to persist the image with the first bounding box information or access a second object localization process based on the first feedback.
    Type: Grant
    Filed: May 10, 2021
    Date of Patent: April 23, 2024
    Assignee: Salesforce, Inc.
    Inventors: Joy Mustafi, Lakshya Kumar, Rajdeep Singh Dua
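The fallback flow this abstract describes (run one localization process, check feedback, fall back to the next) can be sketched as follows; the localizer callables and the accept predicate are hypothetical stand-ins, not the patented implementation.

```python
# Hedged sketch of feedback-driven localizer fallback; names are assumptions.

def localize_with_fallback(image, localizers, accept):
    """Try each object localization process in order; return the first
    bounding-box result that the feedback (accept predicate) approves."""
    for localize in localizers:
        boxes = localize(image)
        if accept(boxes):
            return boxes
    return None  # no process produced accepted boxes
```

In practice the accepted image and boxes would then be persisted, as the abstract describes.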
  • Patent number: 11967086
    Abstract: A method for trajectory generation based on player tracking is described herein. The method includes determining a temporal association for a first player in a captured field of view and determining a spatial association for the first player. The method also includes deriving a global player identification based on the temporal association and the spatial association and generating a trajectory based on the global player identification.
    Type: Grant
    Filed: July 31, 2019
    Date of Patent: April 23, 2024
    Assignee: INTEL CORPORATION
    Inventors: Yikai Fang, Qiang Li, Wenlong Li, Chenning Liu, Chen Ling, Hongzhi Tao, Yumeng Wang, Hang Zheng
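A rough sketch of fusing a temporal association (a per-camera track ID persisting over frames) with a spatial association (tracks from different cameras landing at nearby field positions) into one global player ID, then accumulating trajectories. The proximity rule and all names are illustrative assumptions, not the patented method.

```python
# Toy global-ID assignment and trajectory generation from multi-camera tracks.

def assign_global_ids(observations, radius=1.0):
    """observations: list of (frame, camera_id, track_id, x, y) in shared
    field coordinates. Tracks whose positions fall within `radius` of an
    existing global identity are merged into it."""
    identities = {}          # global_id -> last known (x, y)
    track_to_global = {}     # (camera_id, track_id) -> global_id
    trajectories = {}        # global_id -> [(frame, x, y), ...]
    next_id = 0
    for frame, cam, track, x, y in sorted(observations):
        key = (cam, track)
        if key not in track_to_global:
            # spatial association: reuse a nearby existing identity if any
            match = next((g for g, (gx, gy) in identities.items()
                          if (gx - x) ** 2 + (gy - y) ** 2 <= radius ** 2), None)
            if match is None:
                match, next_id = next_id, next_id + 1
            track_to_global[key] = match
        gid = track_to_global[key]
        identities[gid] = (x, y)
        trajectories.setdefault(gid, []).append((frame, x, y))
    return trajectories
```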
  • Patent number: 11967089
    Abstract: Embodiments of this application provide an object tracking method performed by a computer device. The method includes, when a target object is lost in a second image frame among first subsequent image frames, determining, according to a first local feature and in second subsequent image frames starting with the second image frame, a third image frame in which the target object reappears after the target object is lost during the tracking; determining a location of a target object region in the third image frame including the target object; and continuing to track the target object in image frames according to the location of the target object region in the third image frame. Through the object tracking method, a lost object can be detected and repositioned by using an extracted first local feature of the target object, thereby effectively resolving the tracking-loss problem in existing technical solutions.
    Type: Grant
    Filed: June 1, 2021
    Date of Patent: April 23, 2024
    Assignee: TENCENT TECHNOLOGY (SHENZHEN) COMPANY LIMITED
    Inventors: Yitong Wang, Jun Huang, Xing Ji
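The re-detection step can be illustrated by comparing the stored local feature of the lost target against candidate features in later frames; the cosine similarity measure and threshold below are assumptions for the sketch, not the patented matching rule.

```python
# Illustrative lost-target re-detection by feature similarity.

def similarity(u, v):
    """Cosine similarity between two feature vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = sum(a * a for a in u) ** 0.5
    nv = sum(b * b for b in v) ** 0.5
    return dot / (nu * nv)

def find_reappearance(target_feature, frames, threshold=0.9):
    """frames: list of per-frame lists of candidate features. Returns
    (frame_index, candidate_index) of the first confident match, or
    None if the target never reappears."""
    for fi, candidates in enumerate(frames):
        for ci, feat in enumerate(candidates):
            if similarity(target_feature, feat) >= threshold:
                return fi, ci
    return None
```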
  • Patent number: 11966047
    Abstract: The present disclosure generally relates to the field of eye tracking systems. An eye tracking system is provided. The eye tracking system comprises an illuminator arrangement, including at least one light source, configured to illuminate an eye of a user. The eye tracking system is configured to enable a reduction of reflections from an optic arrangement (e.g., a pair of glasses) that is located in a light beam path between the illuminator arrangement and the eye when the eye tracking system is in use. The illuminator arrangement is configured to emit p-polarized light to be incident on a surface of the optic arrangement at an angle corresponding to, or substantially corresponding to, Brewster's angle.
    Type: Grant
    Filed: September 28, 2020
    Date of Patent: April 23, 2024
    Assignee: Tobii AB
    Inventor: Magnus Arvidsson
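The physical effect the abstract relies on is standard optics: p-polarized light incident at Brewster's angle produces essentially no specular reflection at a dielectric surface. A small helper computing that angle from the two refractive indices (the index values in the test are assumed, typical of air and glass):

```python
import math

def brewster_angle_deg(n1: float, n2: float) -> float:
    """Brewster's angle (degrees) for light passing from medium n1 into n2:
    theta_B = arctan(n2 / n1). At this incidence angle the p-polarized
    reflection vanishes, which suppresses glasses glare for the eye tracker."""
    return math.degrees(math.atan2(n2, n1))
```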
  • Patent number: 11967139
    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for false detection removal using adversarial masks. The method includes performing object detection on a first image that includes a first region using a detection model; determining that the detection model incorrectly classified the first region of the first image; generating an adversarial mask based on the first region of the first image and the detection model; obtaining a second image that includes the first region; generating a masked image based on the second image and the adversarial mask; and performing object detection on the masked image including the first region using the detection model.
    Type: Grant
    Filed: May 13, 2021
    Date of Patent: April 23, 2024
    Assignee: ObjectVideo Labs, LLC
    Inventors: Eduardo Romera Carmena, Gang Qian, Allison Beach
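The masking step can be sketched as below. This is illustrative only: the fixed perturbation stands in for the adversarial optimization against the trained detector that the abstract implies, and all names are assumptions.

```python
# Toy adversarial-mask construction and application (pure Python, uint8-like).

def generate_adversarial_mask(region, height, width, epsilon=8):
    """Build an additive mask (list of rows) that is nonzero only inside
    the falsely detected region (y0, x0, y1, x1). In practice the values
    would be optimized against the detector rather than fixed."""
    y0, x0, y1, x1 = region
    return [[epsilon if y0 <= r < y1 and x0 <= c < x1 else 0
             for c in range(width)] for r in range(height)]

def apply_mask(image, mask):
    """Add the mask to the image per pixel and clip to the 0..255 range,
    yielding the masked image fed back into the detector."""
    return [[max(0, min(255, p + m)) for p, m in zip(prow, mrow)]
            for prow, mrow in zip(image, mask)]
```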
  • Patent number: 11965967
    Abstract: A computer implemented scheme for a light detection and ranging (LIDAR) system in which point cloud feature extraction and segmentation are achieved efficiently by: (1) data structuring; (2) edge detection; and (3) region growing.
    Type: Grant
    Filed: November 16, 2022
    Date of Patent: April 23, 2024
    Assignee: Oregon State University
    Inventors: Erzhuo Che, Michael Olsen
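Step (3) can be illustrated with a minimal region-growing pass over a 3D point list; the grid structuring and edge tests of the scheme are replaced here by a plain Euclidean-distance neighbor rule, purely for illustration.

```python
# Minimal flood-fill region growing over 3D points (illustrative only).

def grow_regions(points, radius=1.0):
    """Cluster points: a point joins a region if it lies within `radius`
    of any point already in that region."""
    unvisited = set(range(len(points)))
    r2 = radius * radius
    regions = []
    while unvisited:
        seed = unvisited.pop()
        region, frontier = [seed], [seed]
        while frontier:
            i = frontier.pop()
            px, py, pz = points[i]
            near = [j for j in unvisited
                    if (points[j][0] - px) ** 2 + (points[j][1] - py) ** 2
                       + (points[j][2] - pz) ** 2 <= r2]
            for j in near:
                unvisited.discard(j)
            region.extend(near)
            frontier.extend(near)
        regions.append(sorted(region))
    return regions
```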
  • Patent number: 11964653
    Abstract: A driving assistance system includes a processor and a memory that stores surroundings information indicating the surroundings of a vehicle detected by sensors mounted on the vehicle. The processor is configured to acquire the position of a target in front of the vehicle and the position of the boundary of a roadway area in front of the vehicle based on the surroundings information. The processor is configured to determine whether the target is in the roadway area based on the position of the target and the position of the boundary. The processor is configured to calculate the distance between the target and the boundary when the target is in the roadway area. The processor is configured to determine whether the target is crossing the roadway area based on the relationship between the distance and a time.
    Type: Grant
    Filed: September 24, 2021
    Date of Patent: April 23, 2024
    Assignee: TOYOTA JIDOSHA KABUSHIKI KAISHA
    Inventor: Wataru Sasagawa
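The final determination ("whether the target is crossing the roadway area based on the relationship between the distance and a time") can be sketched as a closing-speed test; the threshold and sampling scheme are assumptions for illustration, not the patented criterion.

```python
# Hedged sketch: is the target closing on the roadway boundary over time?

def is_crossing(distances, dt, speed_threshold=0.5):
    """distances: target-to-boundary distance samples taken dt apart.
    The target is judged to be crossing if its average closing speed
    toward the boundary exceeds the threshold."""
    if len(distances) < 2:
        return False
    closing_speed = (distances[0] - distances[-1]) / (dt * (len(distances) - 1))
    return closing_speed > speed_threshold
```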
  • Patent number: 11967104
    Abstract: A method of determining the location of an object in a camera field of view is disclosed. The method may utilize a camera system that includes a digital video camera and a computer that runs image analysis software for analyzing images received from the camera. The image analysis software is configured to isolate pixel groups in each image. The method includes the steps of locating an object in the camera's field of view in which the object is represented by a set of pixels; identifying the centroid of the set of pixels representing the object; and calculating the location of the object relative to the camera based upon known parameters including the focal plane and camera lens location.
    Type: Grant
    Filed: November 29, 2021
    Date of Patent: April 23, 2024
    Assignee: United States of America as represented by the Secretary of the Air Force
    Inventors: Anthony Ligouri, Hayk Azatyan, William Erwin, Travis Rennich, David Shald, Adam Warren
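The centroid and bearing steps can be illustrated as follows; the pinhole-model bearing and the focal length in pixels are assumed parameters, and this is a sketch of the general technique rather than the patented calculation.

```python
import math

def centroid(pixels):
    """Centroid of the pixel group representing the object.
    pixels: list of (row, col) coordinates."""
    n = len(pixels)
    return (sum(p[0] for p in pixels) / n, sum(p[1] for p in pixels) / n)

def bearing_from_center(col, image_width, focal_px):
    """Horizontal angle (radians) of a pixel column off the optical axis,
    from the pinhole camera model with focal length in pixels."""
    return math.atan2(col - image_width / 2.0, focal_px)
```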
  • Patent number: 11966838
    Abstract: In various examples, a machine learning model—such as a deep neural network (DNN)—may be trained to use image data and/or other sensor data as inputs to generate two-dimensional or three-dimensional trajectory points in world space, a vehicle orientation, and/or a vehicle state. For example, sensor data that represents orientation, steering information, and/or speed of a vehicle may be collected and used to automatically generate a trajectory for use as ground truth data for training the DNN. Once deployed, the trajectory points, the vehicle orientation, and/or the vehicle state may be used by a control component (e.g., a vehicle controller) for controlling the vehicle through a physical environment. For example, the control component may use these outputs of the DNN to determine a control profile (e.g., steering, decelerating, and/or accelerating) specific to the vehicle for controlling the vehicle through the physical environment.
    Type: Grant
    Filed: May 10, 2019
    Date of Patent: April 23, 2024
    Assignee: NVIDIA Corporation
    Inventors: Urs Muller, Mariusz Bojarski, Chenyi Chen, Bernhard Firner
  • Patent number: 11966517
    Abstract: A human interface method including steps of presenting an image, then receiving a gesture from the user. The image is analyzed to identify the elements of the image and then compared to known images, after which an input is solicited from the user or a menu is displayed to the user. Comparing the image and/or graphical image elements may be effectuated using a trained artificial intelligence engine or, in some embodiments, with a structured data source, said data source including predetermined images and menu options. If the input from the user is known, then a predetermined menu is presented. If the image is not known, then an image or other menu options are presented, and the desired options are solicited from the user. Once the user selects an option, the resulting selection may be used to further train the AI system or be added to the structured data source for future reference.
    Type: Grant
    Filed: July 26, 2022
    Date of Patent: April 23, 2024
    Inventor: Richard Terrell
  • Patent number: 11967105
    Abstract: A system and method are provided. The method comprises obtaining a camera live stream from a camera in a user device, the camera live stream including image data of a particular product; determining one or more image features common to images of one or more products based at least on image analysis of image data of the images of the one or more products; comparing the one or more image features to one or more image features of the image data of the particular product to generate one or more potential adjustments to the one or more image features of the image data of the particular product; and providing, for presentation together with the camera live stream on the user device, at least one indication based on the one or more potential adjustments to the one or more image features of the image data of the particular product.
    Type: Grant
    Filed: February 6, 2023
    Date of Patent: April 23, 2024
    Assignee: Shopify Inc.
    Inventors: Benjamin Lui, Guduru Sai Nihas, Salim Batlouni
  • Patent number: 11957412
    Abstract: An imaging system for an ophthalmic laser system includes a prism cone made of a transparent optical material and disposed downstream of the focusing objective lens of the ophthalmic laser system, the prism cone having an upper surface, a lower surface parallel to the upper surface, a tapered side surface between the upper and lower surfaces, and a beveled surface formed at an upper edge of the prism cone and intersecting the upper surface and the side surface, and a camera disposed adjacent to the prism cone and facing the beveled surface. The camera is disposed to directly receive light that enters the lower surface of the prism cone and exits the beveled surface without having been reflected by any surface.
    Type: Grant
    Filed: November 11, 2019
    Date of Patent: April 16, 2024
    Assignee: AMO Development, LLC
    Inventors: Zenon Witowski, Mohammad Saidur Rahaman, Daryl Wong
  • Patent number: 11960645
    Abstract: A method for disengaging a surgical instrument of a surgical robotic system, comprising: receiving a gaze input from an eye tracker; determining, by one or more processors, whether the gaze input indicates the gaze of the user is outside or inside of the display; in response to determining the gaze input indicates the gaze of the user is outside of the display, determining an amount of time the gaze of the user is outside of the display; in response to determining the gaze of the user is outside of the display for less than a maximum amount of time, pausing the surgical robotic system from a teleoperation mode; and in response to determining the gaze of the user is outside of the display for more than the maximum amount of time, disengaging the surgical robotic system from the teleoperation mode.
    Type: Grant
    Filed: November 16, 2021
    Date of Patent: April 16, 2024
    Assignee: Verb Surgical Inc.
    Inventors: Anette Lia Freiin von Kapri, Denise Ann Miller, Paolo Invernizzi, Joan Savall, John Magnasco
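The pause/disengage decision reduces to a small rule on gaze state and elapsed time. A minimal sketch, assuming an illustrative timeout value and action labels that are not taken from the patent:

```python
def teleop_action(gaze_on_display: bool, seconds_off_display: float,
                  max_off_time: float = 2.0) -> str:
    """Return the teleoperation-mode action for the current gaze sample."""
    if gaze_on_display:
        return "continue"
    if seconds_off_display < max_off_time:
        return "pause"      # gaze briefly left the display
    return "disengage"      # gaze stayed off the display too long
```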
  • Patent number: 11959216
    Abstract: Disclosed herein is a washing machine capable of identifying whether laundry in an inner tub includes waterproof clothing. The washing machine includes a cabinet provided with an opening at an upper portion thereof, an outer tub provided in the cabinet, an inner tub provided in the outer tub, a motor configured to rotate the inner tub, a camera configured to capture an image of an inside of the inner tub, and a controller configured to control the motor to increase a rotational speed of the inner tub to a first rotational speed during spinning. The controller is configured to control the motor to set the rotational speed of the inner tub to a second rotational speed, which is less than the first rotational speed, based on the image of the inside of the inner tub captured by the camera during the spinning.
    Type: Grant
    Filed: July 27, 2020
    Date of Patent: April 16, 2024
    Assignee: SAMSUNG ELECTRONICS CO., LTD.
    Inventors: Sungmo Lee, Junhyun Park, Seokmo Chang, Jeonghoon Kang
  • Patent number: 11960991
    Abstract: A computer-implemented method for training a classifier, particularly a binary classifier, for classifying input signals to optimize performance according to a non-decomposable metric that measures an alignment between classifications corresponding to input signals of a set of training data and corresponding predicted classifications of the input signals obtained from the classifier. The method includes providing weighting factors that characterize how the non-decomposable metric depends on a plurality of terms from a confusion matrix of the classifications and the predicted classifications, and training the classifier depending on the provided weighting factors.
    Type: Grant
    Filed: November 17, 2020
    Date of Patent: April 16, 2024
    Assignees: ROBERT BOSCH GMBH, CARNEGIE MELLON UNIVERSITY
    Inventors: Rizal Fathony, Frank Schmidt, Jeremy Zieg Kolter
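The idea of expressing a metric through confusion-matrix terms and per-term weighting factors can be illustrated as below; the linear combination is a deliberate simplification of the formulation described in the abstract, and all names are assumptions.

```python
# Illustrative confusion-matrix terms and a term-weighted metric.

def confusion_terms(labels, preds):
    """Counts (TP, FP, FN, TN) for binary labels/predictions."""
    tp = sum(1 for y, p in zip(labels, preds) if y == 1 and p == 1)
    fp = sum(1 for y, p in zip(labels, preds) if y == 0 and p == 1)
    fn = sum(1 for y, p in zip(labels, preds) if y == 1 and p == 0)
    tn = sum(1 for y, p in zip(labels, preds) if y == 0 and p == 0)
    return tp, fp, fn, tn

def weighted_metric(labels, preds, weights):
    """weights: (w_tp, w_fp, w_fn, w_tn) characterizing how the metric
    depends on each confusion-matrix term; the classifier would then be
    trained against these provided weighting factors."""
    terms = confusion_terms(labels, preds)
    return sum(w * t for w, t in zip(weights, terms))
```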
  • Patent number: 11960998
    Abstract: Various embodiments herein each include at least one of systems, methods, software, and data structures for context-aided machine vision. For example, one method embodiment includes identifying a customer in a shopping area and maintaining an item bin in a computing system of data identifying items the customer has picked up for purchase. This method further includes receiving an image of the customer holding an item and performing item identification processing on the image to identify the item the customer is holding. The item identification processing may be performed based in part on a stored shopping history of the customer indicating items the customer is more likely to purchase. The identified item is then added to the item bin of the customer.
    Type: Grant
    Filed: March 15, 2023
    Date of Patent: April 16, 2024
    Assignee: NCR Voyix Corporation
    Inventors: Brent Vance Zucker, Adam Justin Lieberman
  • Patent number: 11960914
    Abstract: Provided are methods and systems for suggesting an enhanced multimodal interaction. The method for suggesting at least one modality of interaction, includes: identifying, by an electronic device, initiation of an interaction by a user with a first device using a first modality; detecting, by the electronic device, an intent of the user and a state of the user based on the identified initiated interaction; determining, by the electronic device, at least one of a second modality and at least one second device, to continue the initiated interaction, based on the detected intent of the user and the detected state of the user; and providing, by the electronic device, a suggestion to the user to continue the interaction with the first device using the determined second modality, by indicating the second modality on the first device or the at least one second device.
    Type: Grant
    Filed: March 28, 2023
    Date of Patent: April 16, 2024
    Assignee: SAMSUNG ELECTRONICS CO., LTD.
    Inventors: Praveen Kumar Guvvakallu Sivamoorthy, Mayank Kumar Tyagi, Navin N, Aravindh N, Sanofer H, Sudip Roy, Arjun Janardhanan Kappatan, Lalith Satya Vara Prasad Medeti, Vrajesh Navinchandra Sejpal, Saumitri Choudhury
  • Patent number: 11961308
    Abstract: Systems and methods for detecting blockages in images are described. An example method may include receiving a plurality of images captured by a camera installed on an apparatus. The method may include identifying one or more candidate blocked regions in the plurality of images. Each of the candidate blocked regions may contain image data caused by blockages in the camera's field-of-view. The method may further include assigning scores to the one or more candidate blocked regions based on relationships among the one or more candidate blocked regions in the plurality of images. In response to a determination that one of the scores is above a predetermined blockage threshold, the method may include generating an alarm signal for the apparatus.
    Type: Grant
    Filed: February 2, 2023
    Date of Patent: April 16, 2024
    Assignee: NVIDIA CORPORATION
    Inventors: Xiaoyan Mu, Xiaohan Hu
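The scoring-and-threshold step can be sketched with a toy persistence score: a candidate region's score grows with how many frames it reappears in. The scoring rule and threshold are assumptions for illustration, not the patented relationship among regions.

```python
# Toy blockage scoring: fraction of frames in which each candidate appears.

def blockage_scores(candidate_regions_per_frame):
    """candidate_regions_per_frame: list (one entry per image) of sets of
    region IDs flagged as possibly blocked."""
    counts = {}
    n = len(candidate_regions_per_frame)
    for regions in candidate_regions_per_frame:
        for r in regions:
            counts[r] = counts.get(r, 0) + 1
    return {r: c / n for r, c in counts.items()}

def raise_alarm(scores, threshold=0.8):
    """Generate an alarm if any region's score exceeds the threshold."""
    return any(s > threshold for s in scores.values())
```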
  • Patent number: 11954537
    Abstract: Scaling an ordered event stream (OES) based on an information-unit (IU) metric is disclosed. The IU metric can correspond to an amount of computing resources that can be consumed to access information embodied in event data of an event of the OES. In this regard, the amount of computing resources needed to access the data of the stream event itself can be distinct from the amount of computing resources employed to access the information embodied in that data. As such, where an external application, e.g., a reader, a writer, etc., can connect to an OES data storage system, enabling the OES to be scaled in response to the burdening of computing resources that access event information, rather than merely event data, can aid in preserving the ordering of events accessed from the OES.
    Type: Grant
    Filed: April 22, 2021
    Date of Patent: April 9, 2024
    Assignee: EMC IP HOLDING COMPANY LLC
    Inventors: Mikhail Danilov, Yohannes Altaye
  • Patent number: 11954804
    Abstract: Provided is an information processing device that enables a virtual object to be displayed at an appropriate position in the real world. The information processing device includes a position estimation unit that estimates a current position in a first coordinate system, a display position setting unit that sets a display position of a virtual object in a third coordinate system on the basis of an environment database, virtual object information including the display position of the virtual object in a second coordinate system, and an observed image captured near the current position, a meta information generation unit that generates observation meta information, and an environment database management unit that compares observation data with environment data of the environment database to determine whether to add the observation data to the environment database.
    Type: Grant
    Filed: August 14, 2019
    Date of Patent: April 9, 2024
    Assignee: SONY CORPORATION
    Inventors: Hajime Wakabayashi, Kuniaki Torii, Ryo Watanabe
  • Patent number: 11954898
    Abstract: There is provided a learning method and a learning device for performing transfer learning on an object detector that has been trained to detect first object classes such that the object detector is able to detect second object classes. Further, a testing method and a testing device are provided to allow at least part of the first object classes and the second object classes to be detected by using the object detector having been trained through the transfer learning. Accordingly, detection performance can be improved for the second object classes, which cannot be detected through a training data set corresponding to the first object classes.
    Type: Grant
    Filed: October 27, 2023
    Date of Patent: April 9, 2024
    Assignee: SUPERB AI CO., LTD.
    Inventor: Kye Hyeon Kim
  • Patent number: 11954916
    Abstract: An automated driving system includes an object detection system. A neural network image encoder generates image embeddings associated with an image including an object. A neural network text encoder generates concept embeddings associated with each of a plurality of concepts. Each of the plurality of concepts is associated with one of at least two object classes. A confidence score module generates a confidence score for each of the plurality of concepts based on the image embeddings and the concept embeddings associated with the concept. An object class prediction module generates a predicted object class of the object based on an association between a set of concepts of the plurality of concepts having at least two of the highest values of the generated confidence scores and the one of the at least two object classes associated with a majority of the set of concepts.
    Type: Grant
    Filed: February 7, 2022
    Date of Patent: April 9, 2024
    Assignee: GM GLOBAL TECHNOLOGY OPERATIONS LLC
    Inventors: Wei Tong, Siddhartha Gupta, Jacob Alan Bond, Zhuoning Yuan
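The confidence-scoring and majority-vote steps can be illustrated as below. Cosine scoring and the concept-to-class table are stand-ins for the learned image and text encoders the abstract describes; every name and embedding here is an assumption.

```python
import math

def cosine(u, v):
    """Cosine similarity between an image embedding and a concept embedding."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def predict_class(image_emb, concept_embs, concept_class, top_k=3):
    """Score every concept against the image embedding, take the top-k by
    confidence, and return the class held by the majority of those concepts."""
    ranked = sorted(concept_embs,
                    key=lambda c: cosine(image_emb, concept_embs[c]),
                    reverse=True)[:top_k]
    votes = {}
    for c in ranked:
        cls = concept_class[c]
        votes[cls] = votes.get(cls, 0) + 1
    return max(votes, key=votes.get)
```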
  • Patent number: 11954805
    Abstract: In one embodiment, a method includes by one or more computing devices, accessing an image including a hand of a user of a head-mounted display at a first time. The method includes generating, from at least the image, a virtual object representation of the hand, defined in a virtual environment that includes at least one other virtual object. The method includes rendering a first image of the virtual environment comprising a first portion of the hand of the user at a first frame rate, and determining a second viewpoint of the user at a second time. The method includes rendering a second image of the virtual environment comprising a second portion of the hand of the user at a second frame rate. The method includes providing, to a set of light emitters of the head-mounted display, instructions to display the second image.
    Type: Grant
    Filed: December 29, 2022
    Date of Patent: April 9, 2024
    Assignee: META PLATFORMS TECHNOLOGIES, LLC
    Inventors: Steve John Clohset, Warren Andrew Hunt
  • Patent number: 11948332
    Abstract: A system for determining the gaze endpoint of a subject, the system comprising: an eye tracking unit adapted to determine the gaze direction of one or more eyes of the subject; a head tracking unit adapted to determine the position, comprising location and orientation, of the eye tracker with respect to a reference coordinate system; and a 3D structure representation unit that uses the 3D structure and position of objects of the scene in the reference coordinate system to provide a 3D structure representation of the scene. Based on the gaze direction, the eye tracker position, and the 3D structure representation, the system calculates the gaze endpoint on an object of the 3D structure representation of the scene or determines the object itself.
    Type: Grant
    Filed: February 17, 2023
    Date of Patent: April 2, 2024
    Assignee: APPLE INC.
    Inventors: Jan Hoffmann, Tom Sengelaub, Denis Williams
  • Patent number: 11945276
    Abstract: The present disclosure discloses a preview vehicle height control system and a method of controlling the same. The system includes a monitoring device configured to detect the road surface condition of a driving path of a vehicle, an active suspension configured to adjust a vehicle height, a memory configured to store a plurality of data maps distinguished based on a type of bump, each data map having a vehicle dynamic characteristic as an input and a tuning factor as an output, and a controller configured to derive the tuning factor based on a data map, among the plurality of data maps of the memory, corresponding to the bump detected by the monitoring device, derive a target vehicle height in a form of a Gaussian distribution by substituting the tuning factor, and control the active suspension to follow the derived target vehicle height.
    Type: Grant
    Filed: August 19, 2022
    Date of Patent: April 2, 2024
    Assignees: HYUNDAI MOTOR COMPANY, KIA CORPORATION, FOUNDATION FOR RESEARCH AND BUSINESS SEOUL NATIONAL UNIVERSITY OF SCIENCE AND TECHNOLOGY
    Inventors: Youngil Sohn, Min Jun Kim, Sang Woo Hwang, Sehyun Chang, Jun Ho Seong, Yong Hwan Jeong, Seong Jin Yim
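The "target vehicle height in a form of a Gaussian distribution" lends itself to a one-line formula. A sketch, where the amplitude, center, and width parameters play the role of the tuning factor derived from the data maps (the parameterization is an assumption):

```python
import math

def target_height(x, amplitude, mu, sigma):
    """Gaussian-shaped target vehicle height profile over position x:
    h(x) = A * exp(-(x - mu)^2 / (2 * sigma^2))."""
    return amplitude * math.exp(-((x - mu) ** 2) / (2 * sigma ** 2))
```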
  • Patent number: 11949628
    Abstract: Technologies to improve wireless communications by on-body products are described. One device includes millimeter wave (mmWave) frequency front-end circuitry and a baseband processor with an Orthogonal Frequency Division Multiplexing (OFDM) physical (PHY) layer. The baseband processor determines a received signal strength indicator (RSSI) value and a phase value associated with a wireless channel in a mmWave frequency range. The baseband processor determines a state of motion of the device using the RSSI value and the phase value. The baseband processor sends data to the second device using a first subcarrier structure of the OFDM PHY layer, in response to the state of motion being a first state of motion. The baseband processor sends data to the second device using a second subcarrier structure of the OFDM PHY layer, in response to the state of motion being a second state of motion having more motion than the first state of motion.
    Type: Grant
    Filed: December 6, 2021
    Date of Patent: April 2, 2024
    Assignee: Amazon Technologies, Inc.
    Inventors: Cyril Arokiaraj Arool Emmanuel, Balamurugan Shanmugam
  • Patent number: 11948400
    Abstract: An action detection method based on a human skeleton feature and a storage medium belong to the field of computer vision. The method includes: for each person, extracting a series of body keypoints in every frame of the video as the human skeleton feature; calculating a body structure center point and an approximated rigid motion area by using the human skeleton feature as the calculated value of the skeleton feature state, and predicting an estimated value for the next frame; performing target matching according to the estimated and calculated values, correlating the human skeleton features belonging to the same target to obtain a skeleton feature sequence, and then correlating the features of each keypoint in the temporal domain to obtain a spatial-temporal domain skeleton feature; and inputting the skeleton feature into an action detection model to obtain an action category. In the disclosure, the accuracy of action detection is improved.
    Type: Grant
    Filed: June 30, 2023
    Date of Patent: April 2, 2024
    Assignee: HUAZHONG UNIVERSITY OF SCIENCE AND TECHNOLOGY
    Inventors: Li Yu, Han Yu
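The center-point and matching steps can be sketched as below; greedy nearest-center matching stands in for the method's prediction-based target matching, and all names are illustrative assumptions.

```python
# Toy body-center computation and nearest-center target matching.

def body_center(keypoints):
    """keypoints: list of (x, y) joints; center = mean of the joints."""
    n = len(keypoints)
    return (sum(p[0] for p in keypoints) / n, sum(p[1] for p in keypoints) / n)

def match_targets(predicted_centers, detected_centers):
    """Greedily pair each predicted target center with the nearest
    unclaimed detection, yielding {target_id: detection_id}."""
    pairs = {}
    free = dict(detected_centers)
    for tid, (px, py) in predicted_centers.items():
        if not free:
            break
        best = min(free, key=lambda d: (free[d][0] - px) ** 2
                                       + (free[d][1] - py) ** 2)
        pairs[tid] = best
        del free[best]
    return pairs
```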
  • Patent number: 11947781
    Abstract: Embodiments of this application provide an interface generation method and a device, where the method is applied to a device having a development function, and may provide a method for automatically adjusting a layout of a visual element on a to-be-generated interface to quickly generate an interface. The method includes: The device obtains a visual element of a reference interface, and obtains configuration information of a display of a target terminal device (501). The device determines a visual focus of the visual element based on attribute information of the visual element (502). The device determines, based on the configuration information of the display, an interface layout template corresponding to the configuration information (503). Finally, the device adjusts, based on the visual focus and the interface layout template, a layout of the visual element on a to-be-generated interface, and generates an interface (504).
    Type: Grant
    Filed: July 23, 2020
    Date of Patent: April 2, 2024
    Assignee: HUAWEI TECHNOLOGIES CO., LTD.
    Inventor: Zhang Gao
  • Patent number: 11948313
    Abstract: A method of tracking subjects in an area. The method including receiving a plurality of sequences of images of corresponding fields of view in the area of real space, using a plurality of trained inference engines that process respective sequences of images in the plurality of sequences of images to locate features of subjects in the corresponding fields of view of the respective sequences, combining the located features from more than one of the trained inference engines which process respective sequences of images having overlapping fields of view to generate data locating subjects in three dimensions in the area of real space during identification intervals, and matching located subjects from a plurality of identification intervals to identify tracked subjects, including comparing located subjects with tracked subjects.
    Type: Grant
    Filed: January 10, 2022
    Date of Patent: April 2, 2024
    Inventor: Jordan E. Fisher
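A minimal sketch of the combine-and-match idea, assuming each trained inference engine already emits candidate 3D locations for a feature seen from overlapping cameras; fusion is reduced to averaging, and matching to greedy nearest-neighbor association with a distance gate. These simplifications are assumptions, not the claimed method.

```python
# Fuse per-camera candidates for the same feature, then associate
# located subjects with tracked subjects by gated nearest neighbor.

def combine_views(candidates):
    """Average candidate 3D locations of one feature from overlapping cameras."""
    n = len(candidates)
    return tuple(sum(c[i] for c in candidates) / n for i in range(3))

def match(located, tracked, gate=1.0):
    """Greedy nearest-neighbor association between located and tracked subjects."""
    pairs = []
    for i, p in enumerate(located):
        best, best_d = None, gate
        for j, q in enumerate(tracked):
            d = sum((a - b) ** 2 for a, b in zip(p, q)) ** 0.5
            if d < best_d:
                best, best_d = j, d
        if best is not None:
            pairs.append((i, best))
    return pairs

fused = combine_views([(1.0, 2.0, 0.0), (1.2, 1.8, 0.2)])
pairs = match([fused], [(1.0, 2.0, 0.1), (5.0, 5.0, 5.0)])
```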
  • Patent number: 11948238
    Abstract: Embodiments relate to a method for real-time facial animation, and a processing device for real-time facial animation. The method includes providing a dynamic expression model, receiving tracking data corresponding to a facial expression of a user, estimating tracking parameters based on the dynamic expression model and the tracking data, and refining the dynamic expression model based on the tracking data and estimated tracking parameters. The method may further include generating a graphical representation corresponding to the facial expression of the user based on the tracking parameters. Embodiments pertain to a real-time facial animation system.
    Type: Grant
    Filed: May 27, 2022
    Date of Patent: April 2, 2024
    Assignee: Apple Inc.
    Inventors: Sofien Bouaziz, Mark Pauly
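One common way to represent a dynamic expression model, sketched here purely for illustration (this is not Apple's implementation), is a neutral shape plus weighted blendshape deltas; the "tracking parameters" of the abstract then correspond to the per-blendshape weights.

```python
# Illustrative blendshape model: shape = neutral + sum_i w_i * delta_i.
# Vertex data is flattened to a plain list of coordinates for brevity.

def apply_weights(neutral, deltas, weights):
    """Reconstruct a face shape from neutral geometry and weighted deltas."""
    shape = list(neutral)
    for w, delta in zip(weights, deltas):
        shape = [s + w * d for s, d in zip(shape, delta)]
    return shape

neutral = [0.0, 0.0, 0.0]
deltas = [[1.0, 0.0, 0.0], [0.0, 2.0, 0.0]]
shape = apply_weights(neutral, deltas, [0.5, 0.25])
```

Estimating the weights from tracking data, and refining the deltas themselves, would be least-squares problems over this same linear model.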
  • Patent number: 11948293
    Abstract: A position of an object is determined by optically capturing at least one capture structure arranged at the object or at a reference object captured from the object and thereby obtaining capture information, the at least one capture structure having a point-symmetrical profile of an optical property that varies along a surface of the capture structure, transforming a location-dependent mathematical function corresponding to the point-symmetrical profile of the optical property into a frequency domain, forming a second frequency-dependent mathematical function from a first frequency-dependent mathematical function, wherein the second mathematical function is formed from a relationship of in each case a real part and an imaginary part of complex function values of the first frequency-dependent mathematical function, and forming at least one function value of the second frequency-dependent mathematical function and determining the same as location information about a location of a point of symmetry of the locati
    Type: Grant
    Filed: January 31, 2021
    Date of Patent: April 2, 2024
    Assignee: Carl Zeiss Industrielle Messtechnik GmbH
    Inventor: Wolfgang Hoegele
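The frequency-domain principle in the abstract can be demonstrated numerically: a point-symmetric (even) profile centered at x0 has a Fourier transform whose phase is linear in frequency, so the relationship between the imaginary and real parts of each complex coefficient encodes x0. The Gaussian profile and grid below are assumptions chosen only to exercise the idea.

```python
import numpy as np

# A profile symmetric about x0; its DFT is a real spectrum times
# exp(-2*pi*i*k*x0/n), so the phase of bin k recovers x0.
n = 256
x = np.arange(n)
x0 = 100.0                                 # true point of symmetry
profile = np.exp(-((x - x0) ** 2) / 50.0)  # even about x0

spec = np.fft.fft(profile)
k = 1                                      # first non-zero frequency bin
phase = np.arctan2(spec[k].imag, spec[k].real)
estimate = -phase * n / (2 * np.pi * k)    # recovered point of symmetry
```

Note the single-bin recovery only works while |2*pi*k*x0/n| stays below pi; a robust implementation would fit the phase slope across several bins.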
  • Patent number: 11948312
    Abstract: In order to minimize the impact of a delay (if any) that occurs when a process for detecting an object from a video takes time, and thereby achieve accurate tracking, the object detection/tracking device according to the present invention is provided with: an acquisition unit which acquires a video; a tracking unit which tracks an object in the video; a detection unit which detects an object in the video; an association unit which associates the same objects that have been detected and tracked in the same image in the video; and a correction unit which corrects the position of the tracked object using the position of the detected object, from among the associated objects.
    Type: Grant
    Filed: April 17, 2019
    Date of Patent: April 2, 2024
    Assignee: NEC CORPORATION
    Inventor: Ryoma Oami
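A minimal sketch of the associate-and-correct loop: the tracker updates every frame while the slower detector lags behind; when a detection arrives, it is associated to a track (here by IoU) and the tracked position is blended toward it. The box format `(x, y, w, h)`, gate, and blend weight are assumptions.

```python
# Associate a detection to a track by IoU, then correct the track position.

def iou(a, b):
    """Intersection-over-union of two (x, y, w, h) boxes."""
    ax1, ay1, ax2, ay2 = a[0], a[1], a[0] + a[2], a[1] + a[3]
    bx1, by1, bx2, by2 = b[0], b[1], b[0] + b[2], b[1] + b[3]
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    union = a[2] * a[3] + b[2] * b[3] - inter
    return inter / union if union > 0 else 0.0

def correct(track, detection, alpha=0.5, gate=0.3):
    """Blend the tracked box toward an associated detection."""
    if iou(track, detection) < gate:
        return track  # not the same object; leave the track unchanged
    return tuple((1 - alpha) * t + alpha * d for t, d in zip(track, detection))

corrected = correct((10.0, 10.0, 20.0, 20.0), (14.0, 10.0, 20.0, 20.0))
```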
  • Patent number: 11948303
    Abstract: A method and apparatus of objective assessment of images captured from a human gastrointestinal (GI) tract are disclosed. According to this method, one or more images captured using an endoscope when the endoscope is inside the human gastrointestinal (GI) tract are received. Whether there is any specific target object is checked. When one or more specific target objects in the images are detected: areas of the specific target objects in the images are determined; an objective assessment score is derived based on the areas of the specific target objects in a substantial number of the images; where the step of detecting the specific target objects is performed using an artificial intelligence process.
    Type: Grant
    Filed: June 19, 2021
    Date of Patent: April 2, 2024
    Assignee: CapsoVision Inc.
    Inventors: Kang-Huai Wang, Chenyu Wu, Gordon C. Wilson
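A hedged sketch of the scoring step only (the AI-based detection itself is not reproduced): given per-image areas of detected target objects, derive one objective score as the mean fraction of image area covered. The aggregation rule and minimum-evidence threshold are assumptions.

```python
# Aggregate per-image target areas into a single objective score.

def objective_score(areas, image_area, min_images=3):
    """Mean covered fraction over images that contain a target object."""
    covered = [a / image_area for a in areas if a > 0]
    if len(covered) < min_images:
        return None  # not enough images with targets for an assessment
    return sum(covered) / len(covered)

score = objective_score([100, 200, 0, 300], image_area=1000)
```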
  • Patent number: 11937883
    Abstract: Various embodiments of the present disclosure encompass a visual endoscopic guidance device employing an endoscopic viewing controller (20) for controlling a display of an endoscopic view (11) of an anatomical structure, and a visual guidance controller (130) for controlling a display of one or more guided manipulation anchors (50-52) within the display of the endoscopic view (11) of the anatomical structure. A guided manipulation anchor (50-52) is representative of a location marking and/or a motion directive of a guided manipulation of the anatomical structure. The visual guidance controller (130) further controls a display of a hidden feature anchor relative to the display of the endoscopic view (11) of the anatomical structure. The hidden feature anchor (53) is representative of a position (e.g., a location and/or an orientation) of a guided visualization of the hidden feature of the anatomical structure.
    Type: Grant
    Filed: December 9, 2020
    Date of Patent: March 26, 2024
    Assignee: KONINKLIJKE PHILIPS N.V.
    Inventors: Paul Thienphrapa, Torre Michelle Bydlon, Prasad Vagdargi, Sean Joseph Kyne, Aleksandra Popovic
  • Patent number: 11941893
    Abstract: A virtual traffic line generation apparatus and a method thereof are provided. The virtual traffic line generation apparatus includes a controller that determines reliability of a traffic line detected for each frame and, when the traffic line is not detected, generates a virtual traffic line based on the traffic line with the highest reliability among traffic lines detected in a previous frame, and a storage storing the reliability of the traffic line for each frame.
    Type: Grant
    Filed: June 16, 2022
    Date of Patent: March 26, 2024
    Assignees: Hyundai Motor Company, Kia Corporation
    Inventor: Gi Won Park
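The fallback logic above could be sketched as follows, assuming the storage keeps (reliability, line) pairs for recent frames; the data shapes are illustrative.

```python
# When no line is detected in the current frame, fall back to the
# highest-reliability line stored from previous frames.

def virtual_line(current, history):
    """Return the detected line, or the most reliable stored one."""
    if current is not None:
        return current
    if not history:
        return None
    reliability, line = max(history, key=lambda item: item[0])
    return line

history = [(0.4, "lane-A"), (0.9, "lane-B"), (0.7, "lane-C")]
line = virtual_line(None, history)
```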
  • Patent number: 11938971
    Abstract: In a vehicle control device for an autonomous driving vehicle that autonomously travels based on an operation command, a gesture image of a person around the autonomous driving vehicle is acquired, and a stored reference gesture image is collated with the acquired gesture image. At this time, when it is discriminated that the gesture of the person around the autonomous driving vehicle is a gesture requesting the autonomous driving vehicle to stop, it is determined whether a disaster has occurred. When it is determined that the disaster has occurred, the autonomous driving vehicle is caused to stop around the person requesting the autonomous driving vehicle to stop.
    Type: Grant
    Filed: January 12, 2022
    Date of Patent: March 26, 2024
    Assignee: TOYOTA JIDOSHA KABUSHIKI KAISHA
    Inventors: Hiromitsu Kobayashi, Taizo Masuda, Yuta Kataoka, Miki Nomoto, Yoshiki Ueda, Satoshi Omi, Yuki Nishikawa
  • Patent number: 11941954
    Abstract: Images captured for components of a device are monitored for changes by evaluating a first region of interest in the images. Periodically, a command is sent to the device to move one or more of the components to a known position or state. A certain component or set of components associated with being moved based on the command is evaluated in a second region of interest in the images to determine if the corresponding component or set of components is in the known position or state within the images. When the corresponding component or set of components is not identified from the images in the known position or state, a security alert is raised for the device and security operations are processed on the host device.
    Type: Grant
    Filed: January 31, 2023
    Date of Patent: March 26, 2024
    Assignee: NCR Corporation
    Inventors: Alexander William Whytock, Conor Michael Fyfe
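A sketch of the verification step: after commanding a component to a known state, the second region of interest is checked against the expected appearance, and a mismatch raises a security alert. The similarity test is reduced here to mean absolute pixel difference with an assumed threshold; a real system would use a more robust comparison.

```python
# Compare an observed region of interest against the expected
# known-state appearance; flag an alert on mismatch.

def verify_component(roi_pixels, expected_pixels, threshold=10.0):
    """True if the observed ROI matches the expected appearance."""
    diff = sum(abs(a - b) for a, b in zip(roi_pixels, expected_pixels))
    return diff / len(roi_pixels) <= threshold

observed = [100, 102, 98, 101]
expected = [100, 100, 100, 100]
alert = not verify_component(observed, expected)
```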
  • Patent number: 11941822
    Abstract: Systems and techniques are described herein for performing optical flow estimation for one or more frames. For example, a process can include determining an optical flow prediction associated with a plurality of frames. The process can include determining a position of at least one feature associated with a first frame and determining, based on the position of the at least one feature in the first frame and the optical flow prediction, a position estimate of a search area for searching for the at least one feature in a second frame. The process can include determining, from within the search area, a position of the at least one feature in the second frame.
    Type: Grant
    Filed: March 8, 2023
    Date of Patent: March 26, 2024
    Assignee: QUALCOMM Incorporated
    Inventors: Jamie Menjay Lin, Fatih Murat Porikli
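The search-area placement described above reduces to simple arithmetic: the feature position in the first frame plus the predicted optical flow gives the center of the search window in the second frame. The fixed window half-size below is an assumption.

```python
# Place a search window in frame 2 from a frame-1 position and a
# predicted flow vector.

def search_area(position, flow, half_size=8):
    """Axis-aligned search window centered on position + predicted flow."""
    cx, cy = position[0] + flow[0], position[1] + flow[1]
    return (cx - half_size, cy - half_size, cx + half_size, cy + half_size)

window = search_area((100.0, 50.0), (5.0, -3.0))
```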
  • Patent number: 11941906
    Abstract: Provided is a method of identifying a hand of a genuine user wearing a wearable device. According to an embodiment, the method includes using a sensor included in the wearable device to recognize a hand located in a detection area of the sensor; estimating a position of a shoulder connected to the recognized hand based on a positional relation between the orientation of the recognized hand and at least one body part connected to the recognized hand; and using information about a probability of a shoulder of the genuine user being present in the estimated position to determine whether the recognized hand is a hand of the genuine user.
    Type: Grant
    Filed: June 10, 2019
    Date of Patent: March 26, 2024
    Assignee: SAMSUNG ELECTRONICS CO., LTD.
    Inventors: Bonkon Koo, Jaewoo Ko
  • Patent number: 11941841
    Abstract: A computer-implemented method according to one embodiment includes running an initial network on a plurality of images to detect actors pictured therein and body joints of the detected actors. The method further includes running fully-connected networks in parallel, one fully-connected network for each of the detected actors, to reconstruct complete three-dimensional poses of the actors. Sequential model fitting is performed on the plurality of images. The sequential model fitting is based on results of running the initial network and the fully-connected networks. The method further includes determining, based on the sequential model fitting, a locational position for a camera in which the camera has a view of a possible point of collision of two or more of the actors. The camera is instructed to be positioned in the locational position.
    Type: Grant
    Filed: September 22, 2021
    Date of Patent: March 26, 2024
    Assignee: International Business Machines Corporation
    Inventors: Yu-Siang Chen, Ching-Chun Liu, Ryan Young, Ting-Chieh Yu
  • Patent number: 11941818
    Abstract: Various implementations disclosed herein include devices, systems, and methods that determine a 3D location of an edge based on image and depth data. This involves determining a 2D location of a line segment corresponding to an edge of an object based on a light-intensity image, determining a 3D location of a plane based on depth values (e.g., based on sampling depth near the edge/on both sides of the edge and fitting a plane to the sampled points), and determining a 3D location of the line segment based on the plane (e.g., by projecting the line segment onto the plane). The devices, systems, and methods may involve classifying an edge as a particular edge type (e.g., fold, cliff, plane) and detecting the edge based on such classification.
    Type: Grant
    Filed: March 3, 2021
    Date of Patent: March 26, 2024
    Assignee: Apple Inc.
    Inventors: Vedant Saran, Alexandre Da Veiga
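The final step of the abstract, projecting the 2D line segment's 3D estimate onto the fitted plane, can be sketched directly; the plane (point p0, unit normal n) and segment endpoints below are illustrative inputs, not values from the patent.

```python
# Snap a 3D line segment to a plane by orthogonal projection of its
# endpoints. The plane is given by a point p0 and a unit normal n.

def project_point(pt, p0, n):
    """Project pt onto the plane through p0 with unit normal n."""
    d = sum((pt[i] - p0[i]) * n[i] for i in range(3))  # signed distance
    return tuple(pt[i] - d * n[i] for i in range(3))

plane_point, normal = (0.0, 0.0, 1.0), (0.0, 0.0, 1.0)
segment = [(0.5, 0.0, 1.2), (0.5, 1.0, 0.9)]
snapped = [project_point(p, plane_point, normal) for p in segment]
```

In the described pipeline, the plane itself would first be fitted (e.g., by least squares) to depth samples taken on both sides of the detected edge.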
  • Patent number: 11941875
    Abstract: Methods, computer systems, and apparatus, including computer programs encoded on computer storage media, for processing a perspective view range image generated from sensor measurements of an environment. The perspective view range image includes a plurality of pixels arranged in a two-dimensional grid and including, for each pixel, (i) features of one or more sensor measurements at a location in the environment corresponding to the pixel and (ii) geometry information comprising range features characterizing a range of the location in the environment corresponding to the pixel relative to the one or more sensors. The system processes the perspective view range image using a first neural network to generate an output feature representation. The first neural network comprises a first perspective point-set aggregation layer comprising a geometry-dependent kernel.
    Type: Grant
    Filed: July 27, 2021
    Date of Patent: March 26, 2024
    Assignee: Waymo LLC
    Inventors: Yuning Chai, Pei Sun, Jiquan Ngiam, Weiyue Wang, Vijay Vasudevan, Benjamin James Caine, Xiao Zhang, Dragomir Anguelov
  • Patent number: 11941648
    Abstract: The disclosure includes implementations for providing a recommendation to a driver of a second DSRC-equipped vehicle. The recommendation may describe an estimate of how long it would take the second DSRC-equipped vehicle to receive a roadside service from a drive-through business. A method according to some implementations may include receiving, by the second DSRC-equipped vehicle, a Dedicated Short Range Communication message (“DSRC message”) that includes path history data. The path history data may describe a path of a first DSRC-equipped vehicle over a plurality of different times while the first DSRC-equipped vehicle is located in a queue of the drive-through business. The method may include determining delay time data for the second DSRC-equipped vehicle based on the path history data for the first DSRC-equipped vehicle. The delay time data may describe the estimate. The method may include providing the recommendation to the driver. The recommendation may include the estimate.
    Type: Grant
    Filed: March 7, 2019
    Date of Patent: March 26, 2024
    Inventors: Gaurav Bansal, Hongsheng Lu, John Kenney, Toru Nakanishi
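A hedged sketch of the delay estimate: path history is taken as (timestamp, position) samples recorded while the first vehicle sat in the drive-through queue, and the estimate is simply the elapsed time from joining the queue to leaving it. Units and data structure are assumptions.

```python
# Estimate queue delay from another vehicle's path history, taken as
# (timestamp_seconds, position) samples recorded while in the queue.

def estimate_delay(path_history):
    """Seconds spent in the queue, from first to last sample."""
    if len(path_history) < 2:
        return 0.0
    return path_history[-1][0] - path_history[0][0]

history = [(0.0, (0, 0)), (120.0, (0, 5)), (300.0, (0, 30))]
delay = estimate_delay(history)
```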
  • Patent number: 11937524
    Abstract: A method includes obtaining, by the treatment system configured to implement a machine learning (ML) algorithm, one or more images of a region of an agricultural environment near the treatment system, wherein the one or more images are captured from a region of the real world where agricultural target objects are expected to be present; determining one or more parameters for use with the ML algorithm, wherein at least one of the one or more parameters is based on one or more ML models related to identification of an agricultural object; determining a real-world target in the one or more images using the ML algorithm, wherein the ML algorithm is at least partly implemented using the one or more processors of the treatment system; and applying a treatment to the target by selectively activating the treatment mechanism based on a result of the determining the target.
    Type: Grant
    Filed: September 15, 2022
    Date of Patent: March 26, 2024
    Assignee: Verdant Robotics, Inc.
    Inventors: Gabriel Thurston Sibley, Lorenzo Ibarria, Curtis Dale Garner, Patrick Christopher Leger, Dustin James Webb
  • Patent number: 11941725
    Abstract: In one embodiment, a method includes, by an operating system of a first artificial-reality device, receiving a notification that virtual objects are shared with the first artificial-reality device by a second artificial-reality device, where the virtual objects are shared by being placed inside a sender-side shared space anchored to a physical object. The method further includes the first artificial-reality device accessing descriptors of a physical object and a spatial-relationship definition between the physical object and a receiver-side shared space, detecting physical objects based on the descriptors, determining a pose of the receiver-side shared space, detecting physical constraints within the receiver-side shared space, receiving display instructions for the virtual objects, and rendering the virtual objects on the first artificial-reality device in the receiver-side shared space.
    Type: Grant
    Filed: November 17, 2021
    Date of Patent: March 26, 2024
    Assignee: Meta Platforms Technologies, LLC
    Inventors: Alexander Michael Louie, Michal Hlavac, Jasper Stevens
  • Patent number: 11941320
    Abstract: An electronic monitoring system has one or more imaging devices that can detect at least one triggering event comprising sound and motion and a controller that executes a program to categorize the triggering event as being located in a user-defined activity zone within the field of view and/or as being a taxonomic-based triggering event. Upon categorizing the triggering event, the system generates an output comprising a video component and an audio component. At least a portion of the audio component is modified if the triggering event is a categorized triggering event. Modification of the audio may include muting all or a portion of the audio component of the output.
    Type: Grant
    Filed: September 28, 2021
    Date of Patent: March 26, 2024
    Assignee: Arlo Technologies, Inc.
    Inventors: Rajinder Singh, John Thomas, Michael Harris, Dennis Aldover
  • Patent number: 11938963
    Abstract: A live map system may be used to propagate observations collected by autonomous vehicles operating in an environment to other autonomous vehicles and thereby supplement a digital map used in the control of the autonomous vehicles. In addition, a live map system in some instances may be used to propagate location-based teleassist triggers to autonomous vehicles operating within an environment. A location-based teleassist trigger may be generated, for example, in association with a teleassist session conducted between an autonomous vehicle and a remote teleassist system proximate a particular location, and may be used to automatically trigger a teleassist session for another autonomous vehicle proximate that location and/or to propagate a suggested action to that other autonomous vehicle.
    Type: Grant
    Filed: December 28, 2022
    Date of Patent: March 26, 2024
    Assignee: AURORA OPERATIONS, INC.
    Inventors: Niels Joubert, Benjamin Kaplan, Stephen O'Hara