Target Tracking Or Detecting Patents (Class 382/103)
  • Patent number: 11907674
    Abstract: Implementations relate to generating multi-modal response(s) through utilization of large language model(s) (LLM(s)). Processor(s) of a system can: receive natural language (NL) based input, generate a multi-modal response that is responsive to the NL based input, and cause the multi-modal response to be rendered. In some implementations, and in generating the multi-modal response, the processor(s) can process, using a LLM, LLM input (e.g., that includes at least the NL based input) to generate LLM output, and determine, based on the LLM output, textual content for inclusion in the multi-modal response and multimedia content for inclusion in the multi-modal response. In some implementations, the multimedia content can be obtained based on a multimedia content tag that is included in the LLM output and that is indicative of the multimedia content. In various implementations, the multimedia content can be interleaved between segments of the textual content.
    Type: Grant
    Filed: September 20, 2023
    Date of Patent: February 20, 2024
    Assignee: GOOGLE LLC
    Inventors: Oscar Akerlund, Evgeny Sluzhaev, Golnaz Ghiasi, Thang Luong, Yifeng Lu, Igor Petrovski, Ágoston Weisz, Wei Yu, Rakesh Shivanna, Michael Andrew Goodman, Apoorv Kulshreshtha, Yu Du, Amin Ghafouri, Sanil Jain, Dustin Tran, Vikas Peswani, YaGuang Li
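    A minimal sketch of how the described interleaving might look in code, assuming a hypothetical inline tag format ("<media:...>") emitted by the LLM and a placeholder lookup_multimedia resolver; neither is specified by the patent.

```python
import re
from typing import Dict, List

# Hypothetical tag format: the LLM output carries inline multimedia content tags
# such as "<media:golden_gate_bridge>" indicating which multimedia item to fetch.
MEDIA_TAG = re.compile(r"<media:([^>]+)>")

def lookup_multimedia(tag: str) -> Dict[str, str]:
    # Placeholder resolver; a real system would query an image/video index.
    return {"type": "image", "ref": f"https://example.invalid/{tag}.jpg"}

def build_multimodal_response(llm_output: str) -> List[Dict[str, str]]:
    """Split the LLM output on multimedia content tags and interleave the
    resolved multimedia items between the surrounding textual segments."""
    response, cursor = [], 0
    for match in MEDIA_TAG.finditer(llm_output):
        text = llm_output[cursor:match.start()].strip()
        if text:
            response.append({"type": "text", "content": text})
        response.append(lookup_multimedia(match.group(1)))
        cursor = match.end()
    tail = llm_output[cursor:].strip()
    if tail:
        response.append({"type": "text", "content": tail})
    return response

print(build_multimodal_response(
    "The Golden Gate Bridge spans the strait. <media:golden_gate_bridge> "
    "It opened in 1937."))
```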
  • Patent number: 11908243
    Abstract: Systems and methods are provided for performing operations comprising: capturing, by an electronic mirroring device, a video feed received from a camera of the electronic mirroring device, the video feed depicting a user; displaying, by one or more processors of the electronic mirroring device, one or more menu options on the video feed that depicts the user, the one or more menu options relating to a first level in a hierarchy of levels; detecting a gesture performed by the user in the video feed; and in response to detecting the gesture, displaying a set of options related to a given option of the one or more menu options, the set of options relating to a second level in the hierarchy of levels.
    Type: Grant
    Filed: March 16, 2021
    Date of Patent: February 20, 2024
    Assignee: Snap Inc.
    Inventors: Dylan Shane Eirinberg, Kyle Goodrich, Andrew James McPhee, Daniel Moreno
  • Patent number: 11908235
    Abstract: The embodiments of the present disclosure provide a method of registering a face based on video data, including: receiving video data; acquiring a first image frame sequence from the video data, wherein each image frame in the first image frame sequence includes a face detection frame containing a complete facial feature; determining whether each image frame reaches a preset definition or not according to a relative position of the face detection frame in the image frame; extracting a plurality of sets of facial features based on image information of the plurality of face detection frames in response to determining that the image frame reaches the preset definition, and determining whether the faces represent the same object or not according to the plurality of sets of facial features; and registering the object according to the first image frame sequence in response to determining that the faces represent the same object.
    Type: Grant
    Filed: December 25, 2020
    Date of Patent: February 20, 2024
    Assignee: BOE TECHNOLOGY GROUP CO., LTD.
    Inventor: Jingtao Xu
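    A rough sketch of the two checks described above, assuming the definition test can be approximated by requiring the face detection frame to sit fully inside the image with a margin, and the same-object test by pairwise cosine similarity of the feature sets; both are assumptions, as the patent does not disclose the exact criteria.

```python
import numpy as np

def box_within_frame(box, frame_w, frame_h, margin=10):
    """Crude proxy for the 'preset definition' check: accept a frame only if
    the face detection frame sits fully inside the image with some margin
    (an assumption; the patent bases the check on the frame's relative position)."""
    x1, y1, x2, y2 = box
    return (x1 >= margin and y1 >= margin and
            x2 <= frame_w - margin and y2 <= frame_h - margin)

def same_object(feature_sets, sim_threshold=0.6):
    """Decide whether all extracted facial feature vectors represent one object
    by requiring every pairwise cosine similarity to exceed a threshold."""
    feats = [f / np.linalg.norm(f) for f in feature_sets]
    for i in range(len(feats)):
        for j in range(i + 1, len(feats)):
            if float(feats[i] @ feats[j]) < sim_threshold:
                return False
    return True

rng = np.random.default_rng(0)
base = rng.normal(size=128)
frames = [base + 0.05 * rng.normal(size=128) for _ in range(5)]
print(box_within_frame((40, 40, 200, 220), frame_w=640, frame_h=480))  # True
print(same_object(frames))  # True: all features come from the same identity
```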
  • Patent number: 11907901
    Abstract: In some embodiments, systems and methods are provided to recognize retail products, comprising: a model training system configured to: identify a customer; access an associated customer profile; access and apply a set of filtering rules to a product database based on customer data; generate a listing of products specific to the customer; access and apply a model training set of rules to train a machine learning model based on the listing of products and corresponding image data for each of the products in the listing of products; and communicate the trained model to a portable user device associated with the first customer.
    Type: Grant
    Filed: January 26, 2023
    Date of Patent: February 20, 2024
    Assignee: Walmart Apollo, LLC
    Inventors: Michael A. Garner, Priyanka Paliwal
  • Patent number: 11908206
    Abstract: A method for estimating a vertical offset of a road segment for a vehicle is disclosed. The method includes tracking a road reference on the road segment at a first moment in time from a first observation point in order to form a first representation. The method further includes obtaining vehicle movement data over a time period extending from the first moment in time to a second moment in time, and tracking the road reference on the road segment at the second moment in time from a second observation point in order to form a second representation.
    Type: Grant
    Filed: February 24, 2021
    Date of Patent: February 20, 2024
    Assignee: Zenuity AB
    Inventor: Staffan Wranne
  • Patent number: 11908216
    Abstract: Disclosed herein is a method of identifying a line of a stave or a stem of a note in a digital image of a musical score comprising: converting the digital image into a matrix in which at least one cell of the matrix corresponds to a pixel of the digital image; setting the at least one cell of the matrix to a first value if the corresponding pixel of the digital image has a pixel intensity above a threshold intensity; identifying adjacent cells having the first value as linked cells, the adjacent cells corresponding to pixels being adjacent in one of a horizontal direction or a vertical direction of the digital image; identifying linked cells having a number of cells exceeding a threshold as a chain of cells; grouping adjacent chains of cells into a group of chains; determining a dimension of the group of chains; and comparing the dimension with a dimension threshold; wherein, if the dimension is above the dimension threshold, determining that pixels corresponding to linked cells of the group of chains correspond to the line of the stave or the stem of the note.
    Type: Grant
    Filed: March 26, 2019
    Date of Patent: February 20, 2024
    Assignee: Nkoda Limited
    Inventor: Sundar Venkitachalam
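    A simplified sketch of the chain-of-cells idea for vertical stems (stave lines would use horizontal runs instead), assuming a grayscale input and arbitrary thresholds; the values and grouping rules are illustrative, not the patented ones.

```python
import numpy as np

def find_vertical_chains(gray, intensity_thr=128, run_thr=20):
    """Binarise the image and, per column, collect vertical runs of 'ink'
    pixels longer than run_thr (the 'chains of linked cells')."""
    ink = gray < intensity_thr                 # dark pixels are score marks
    chains = []                                # (col, row_start, row_end)
    for col in range(ink.shape[1]):
        rows = np.flatnonzero(ink[:, col])
        if rows.size == 0:
            continue
        breaks = np.flatnonzero(np.diff(rows) > 1)   # split into consecutive runs
        for seg in np.split(rows, breaks + 1):
            if seg.size >= run_thr:
                chains.append((col, seg[0], seg[-1]))
    return chains

def group_adjacent_chains(chains):
    """Group chains from adjacent columns; each group approximates one stem."""
    groups, current = [], []
    for chain in sorted(chains):
        if current and chain[0] - current[-1][0] > 1:
            groups.append(current)
            current = []
        current.append(chain)
    if current:
        groups.append(current)
    return groups

def classify_groups(groups, height_thr=40):
    """If a group's vertical extent exceeds the dimension threshold, treat its
    pixels as belonging to a stem (or, transposed, a stave line)."""
    stems = []
    for g in groups:
        top = int(min(c[1] for c in g))
        bottom = int(max(c[2] for c in g))
        if bottom - top >= height_thr:
            stems.append((g[0][0], g[-1][0], top, bottom))  # col range, row range
    return stems

# Toy 100x100 "score": one vertical stem three pixels wide.
img = np.full((100, 100), 255, dtype=np.uint8)
img[10:90, 50:53] = 0
print(classify_groups(group_adjacent_chains(find_vertical_chains(img))))
```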
  • Patent number: 11908207
    Abstract: A method for detecting road diseases by intelligent cruise via an unmanned aerial vehicle (UAV), the UAV, and a detecting system therefor are provided. In the method, a road disease detection model and a road recognition model based on a deep learning network are built into the UAV, and the method comprises: automatically flying the UAV along a predetermined route on the actual road determined by the road recognition model, and obtaining road disease detection results from the road disease detection model. The present invention adopts the road recognition model and the road disease detection model based on a deep learning network, which can realize automatic cruise and automatic road disease detection; only a predetermined route or area range needs to be set, which is convenient and fast.
    Type: Grant
    Filed: October 29, 2021
    Date of Patent: February 20, 2024
    Assignees: BeSTDR Infrastructure Hospital (Pingyu), SAFEKEY Engineering Technology (Zhengzhou), Ltd.
    Inventors: Hongyuan Fang, Niannian Wang, Duo Ma, Juan Zhang, Jiaxiu Dong, Binghan Xue, Haobang Hu, Jianwei Lei
  • Patent number: 11907339
    Abstract: As agents move about a materials handling facility, tracklets representative of the position of each agent are maintained along with a confidence score indicating a confidence that the position of the agent is known. If the confidence score falls below a threshold level, image data of the agent associated with the low confidence score is obtained and processed to generate one or more embedding vectors representative of the agent at a current position. Those embedding vectors are then compared with embedding vectors of other candidate agents to determine a set of embedding vectors having a highest similarity. The candidate agent represented by the set of embedding vectors having the highest similarity score is determined to be the agent and the position of that candidate agent is updated to the current position, thereby re-identifying the agent.
    Type: Grant
    Filed: July 8, 2022
    Date of Patent: February 20, 2024
    Assignee: Amazon Technologies, Inc.
    Inventors: Behjat Siddiquie, Tian Lan, Jayakrishnan Eledath, Hoi Cheung Pang
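    A sketch of the embedding comparison step, assuming cosine similarity and a mean-of-best-matches aggregation over each candidate's stored vectors; the patent speaks only of "highest similarity" without fixing the metric.

```python
import numpy as np

def reidentify(current_embeddings, candidates):
    """Compare the embedding vectors of the low-confidence agent against each
    candidate agent's stored embeddings and return the most similar candidate.

    current_embeddings: (n, d) vectors for the agent at the current position.
    candidates: dict mapping agent_id -> (m, d) stored embedding vectors.
    """
    def normalise(v):
        return v / np.linalg.norm(v, axis=1, keepdims=True)

    cur = normalise(np.asarray(current_embeddings, dtype=float))
    best_id, best_score = None, -1.0
    for agent_id, stored in candidates.items():
        sto = normalise(np.asarray(stored, dtype=float))
        # Set similarity: mean of the best pairwise cosine similarities.
        score = float(np.mean(np.max(cur @ sto.T, axis=1)))
        if score > best_score:
            best_id, best_score = agent_id, score
    return best_id, best_score

rng = np.random.default_rng(1)
agent_a, agent_b = rng.normal(size=(3, 64)), rng.normal(size=(3, 64))
query = agent_a + 0.1 * rng.normal(size=(3, 64))
print(reidentify(query, {"A": agent_a, "B": agent_b}))  # -> ("A", ...)
```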
  • Patent number: 11907269
    Abstract: In an approach for detecting non-obvious relationships between entities from visual data sources, a processor calculates a co-occurrence frequency score for an entity pair from visual data. A processor calculates a distance proximity score for the entity pair from the visual data. A processor determines an event type in the visual data. A processor determines a timeline relationship in the visual data. A processor calculates a relationship score based on the co-occurrence frequency score, the distance proximity score, the event type, and the timeline relationship. A processor detects a relationship between the entity pair based on the relationship score.
    Type: Grant
    Filed: December 1, 2020
    Date of Patent: February 20, 2024
    Assignee: International Business Machines Corporation
    Inventors: Srinivasan S. Muthuswamy, Mukesh Kumar, Subhendu Das
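    The abstract lists the four inputs to the relationship score but not how they are combined; a weighted-sum aggregation is one plausible reading, sketched below with made-up weights and threshold.

```python
from dataclasses import dataclass

@dataclass
class PairEvidence:
    co_occurrence: float       # normalised co-occurrence frequency score, 0..1
    distance_proximity: float  # normalised proximity score, 0..1 (closer -> higher)
    event_weight: float        # weight assigned to the detected event type
    timeline_overlap: float    # normalised timeline-relationship score, 0..1

def relationship_score(ev: PairEvidence,
                       weights=(0.4, 0.3, 0.2, 0.1)) -> float:
    """One plausible aggregation (a weighted sum; the patent does not disclose
    the exact formula) of the four factors into a single relationship score."""
    w1, w2, w3, w4 = weights
    return (w1 * ev.co_occurrence + w2 * ev.distance_proximity +
            w3 * ev.event_weight + w4 * ev.timeline_overlap)

def related(ev: PairEvidence, threshold=0.5) -> bool:
    return relationship_score(ev) >= threshold

print(related(PairEvidence(0.8, 0.7, 0.6, 0.9)))  # True  (score 0.74)
print(related(PairEvidence(0.1, 0.2, 0.3, 0.1)))  # False (score 0.17)
```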
  • Patent number: 11906533
    Abstract: A chemical sensing system is described. The chemical sensing system can include a plurality of sensors arranged on at least one substrate. The sensors may have differing sensitivities to sense different analytes, and may each be configured to output a signal in response to sensing one or more of the different analytes. The chemical sensing system can further include a computer processor programmed to receive the signals output from the plurality of sensors. The computer processor may be further programmed to determine a concentration of analytes in the sample based, at least in part, on the received signals and a model relating the signals or information derived from the signals to an output representation having bases corresponding to analytes.
    Type: Grant
    Filed: June 28, 2019
    Date of Patent: February 20, 2024
    Assignee: Stratuscent Inc.
    Inventors: Mojtaba Khomami Abadi, Amir Bahador Gahroosi, Matthew V. Gould, Ashok Prabhu Masilamani
  • Patent number: 11899749
    Abstract: In various examples, training methods are described to generate a trained neural network that is robust to various environmental features. In an embodiment, training includes modifying images of a dataset and generating bounding boxes and/or other segmentation information for the modified images, which is used to train a neural network.
    Type: Grant
    Filed: March 15, 2021
    Date of Patent: February 13, 2024
    Assignee: NVIDIA CORPORATION
    Inventors: Subhashree Radhakrishnan, Partha Sriram, Farzin Aghdasi, Seunghwan Cha, Zhiding Yu
  • Patent number: 11900668
    Abstract: The invention relates to a system for identifying at least one object at least partially immerged in a water area, said system comprising a capturing module comprising at least one camera, said at least one camera being configured to generate at least one sequence of images of said water area, and a processing module being configured to receive at least one sequence of images from said at least one camera and comprising at least one artificial neural network, said at least one artificial neural network being configured to detect at least one object in said at least one received sequence of images, extract a set of features from said at least one detected object, compare said extracted set of features with at least one predetermined set of features associated with a predefined object, and identify the at least one detected object when the extracted set of features matches with the at least one predetermined set of features.
    Type: Grant
    Filed: September 27, 2019
    Date of Patent: February 13, 2024
    Assignee: SEA.AI GMBH
    Inventor: Raphaël Biancale
  • Patent number: 11902677
    Abstract: An image sensor suitable for use in an augmented reality system to provide low latency image analysis with low power consumption. The augmented reality system can be compact, and may be small enough to be packaged within a wearable device such as a set of goggles or mounted on a frame resembling ordinary eyeglasses. The image sensor may receive information about a region of an imaging array associated with a movable object and selectively output imaging information for that region. The region may be updated dynamically as the image sensor and/or the object moves. Such an image sensor provides a small amount of data from which object information used in rendering an augmented reality scene can be developed. The amount of data may be further reduced by configuring the image sensor to output indications of pixels for which the measured intensity of incident light changes.
    Type: Grant
    Filed: October 30, 2019
    Date of Patent: February 13, 2024
    Assignee: Magic Leap, Inc.
    Inventors: Martin Georg Zahnert, Alexander Ilic, Erik Fonseka
  • Patent number: 11900667
    Abstract: Embodiments may provide improved techniques for object detection so as to improve the finding of objects and the accuracy of the boundary predictions using parametric curves defined by multiple control points. For example, in an embodiment, a method may be implemented in a computer system comprising a processor, memory accessible by the processor, and computer program instructions stored in the memory and executable by the processor, the method may comprise receiving an image, extracting from the image a plurality of features related to objects shown in the image, generating, from the extracted features, at least one plurality of points representing a parametric curve bounding an object shown in the image; and outputting the plurality of points representing the parametric curve.
    Type: Grant
    Filed: April 28, 2021
    Date of Patent: February 13, 2024
    Assignee: International Business Machines Corporation
    Inventors: Yoel Shoshan, Vadim Ratner
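    A small sketch of turning control points into "a plurality of points representing a parametric curve", using de Casteljau evaluation of a Bezier curve; the patent does not commit to a particular curve family, so this is illustrative only.

```python
import numpy as np

def bezier_points(control_points, n_samples=100):
    """Sample a Bezier curve defined by its control points via de Casteljau's
    algorithm, returning the plurality of boundary points."""
    cps = np.asarray(control_points, dtype=float)
    ts = np.linspace(0.0, 1.0, n_samples)
    out = np.empty((n_samples, cps.shape[1]))
    for i, t in enumerate(ts):
        pts = cps.copy()
        while len(pts) > 1:                       # repeated linear interpolation
            pts = (1.0 - t) * pts[:-1] + t * pts[1:]
        out[i] = pts[0]
    return out

# Closed boundary sketch: repeat the first control point at the end so the
# sampled curve returns to its starting point.
ctrl = [(10, 10), (60, 0), (90, 40), (60, 80), (10, 70), (10, 10)]
boundary = bezier_points(ctrl, n_samples=64)
print(boundary[:3].round(2))
```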
  • Patent number: 11900707
    Abstract: A skeletal information threshold setting apparatus includes: a joint information input unit that accepts an input of important joints among joints of a subject and a confidence threshold for the important joints; and a threshold setting unit that acquires a confidence threshold for each of multiple joints of the subject, including the important joints, based on the important joints and the confidence threshold for the important joints that were input, and sets the acquired confidence thresholds for the joints as thresholds to be used in making a determination regarding a skeletal estimation result for the subject.
    Type: Grant
    Filed: May 16, 2019
    Date of Patent: February 13, 2024
    Assignee: NIPPON TELEGRAPH AND TELEPHONE CORPORATION
    Inventors: Akio Kameda, Megumi Isogai, Hideaki Kimata
  • Patent number: 11900633
    Abstract: A system and a method include an imaging device configured to capture one or more images of a first vehicle. A control unit is in communication with the imaging device. A model database is in communication with the control unit. The model database stores a three-dimensional (3D) model of the first vehicle. The control unit is configured to receive image data regarding the one or more images of the first vehicle from the imaging device and analyze the image data with respect to the 3D model of the first vehicle to determine a pose of the first vehicle.
    Type: Grant
    Filed: December 7, 2021
    Date of Patent: February 13, 2024
    Assignee: The Boeing Company
    Inventor: Robert M. Cramblitt
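    The abstract does not disclose how the image data is analyzed against the 3D model; one conventional way to recover a pose from correspondences between 3D model points and their 2D image projections is a Perspective-n-Point solve, sketched here with OpenCV on synthetic correspondences (not necessarily the patented method).

```python
import numpy as np
import cv2

# Hypothetical pinhole camera intrinsics, no lens distortion.
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
dist = np.zeros(5)

# Six reference points on the vehicle's 3D model (model frame, metres).
model_pts = np.array([[0.0, 0.0, 0.0], [4.5, 0.0, 0.0], [4.5, 1.8, 0.0],
                      [0.0, 1.8, 0.0], [1.0, 0.9, 1.4], [3.5, 0.9, 1.4]])

# Simulate their pixel locations under a known ground-truth pose, then
# recover that pose from the 2D-3D correspondences with solvePnP.
rvec_true = np.array([[0.1], [0.4], [0.05]])
tvec_true = np.array([[0.2], [-0.3], [12.0]])
img_pts, _ = cv2.projectPoints(model_pts, rvec_true, tvec_true, K, dist)

ok, rvec, tvec = cv2.solvePnP(model_pts, img_pts, K, dist)
print("recovered rotation (Rodrigues):", rvec.ravel())
print("recovered translation:", tvec.ravel())
```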
  • Patent number: 11900729
    Abstract: An eyewear having an electronic processor configured to identify a hand gesture including a sign language, and to generate speech that is indicative of the identified hand gesture. The electronic processor uses a convolutional neural network (CNN) to identify the hand gesture by matching the hand gesture in the image to a set of hand gestures, wherein the set of hand gestures is a library of hand gestures stored in a memory. The hand gesture can include a static hand gesture, and a moving hand gesture. The electronic processor is configured to identify a word from a series of hand gestures.
    Type: Grant
    Filed: November 18, 2021
    Date of Patent: February 13, 2024
    Assignee: SNAP INC.
    Inventors: Ryan Chan, Brent Mills, Eitan Pilipski, Jennica Pounds, Elliot Solomon
  • Patent number: 11900249
    Abstract: In a case where the operation program is started, a CPU of the mini-batch learning apparatus functions as a calculation unit, a specifying unit, and an update unit. The calculation unit calculates an area ratio of each of a plurality of classes in mini-batch data. The specifying unit specifies a rare class of which the area ratio is lower than a setting value. The update unit sets an update level of the machine learning model in a case where the rare class is specified by the specifying unit to be lower than an update level of the machine learning model in a case where the rare class is not specified by the specifying unit.
    Type: Grant
    Filed: June 2, 2021
    Date of Patent: February 13, 2024
    Assignee: FUJIFILM Corporation
    Inventor: Takashi Wakui
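    A compact sketch of the area-ratio / rare-class logic, assuming label maps as the mini-batch data and treating the learning rate as the "update level" (an assumption; the patent leaves the update mechanism abstract).

```python
import numpy as np

def area_ratios(label_batch, num_classes):
    """Fraction of pixels each class occupies across the mini-batch labels."""
    counts = np.bincount(label_batch.ravel(), minlength=num_classes)
    return counts / counts.sum()

def update_level(label_batch, num_classes, setting_value=0.05,
                 base_lr=1e-3, reduced_lr=1e-4):
    """Lower the model's update level (here: the learning rate) whenever a
    rare class -- one whose area ratio falls below the setting value -- is
    present. Classes absent from the batch are ignored (an assumption)."""
    ratios = area_ratios(label_batch, num_classes)
    rare = np.flatnonzero((ratios > 0) & (ratios < setting_value))
    return (reduced_lr if rare.size else base_lr), rare

labels = np.zeros((4, 64, 64), dtype=np.int64)   # mini-batch of label maps
labels[0, :2, :2] = 1                            # class 1 covers a tiny area
lr, rare_classes = update_level(labels, num_classes=3)
print(lr, rare_classes)                          # 0.0001 [1]
```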
  • Patent number: 11899722
    Abstract: To improve accuracy of search, a learner (L) of a search system calculates a feature quantity of information that is input and outputs a first analysis result of the information in a first viewpoint and a second analysis result of the information in a second viewpoint based on the feature quantity. Storing means stores a feature quantity of information to be searched, which has been input in the learner (L), in a database. Input means inputs input information in the learner (L). Search means searches for information to be searched that is similar to the input information in the feature quantity based on the database.
    Type: Grant
    Filed: June 20, 2018
    Date of Patent: February 13, 2024
    Assignee: RAKUTEN GROUP, INC.
    Inventor: Yeongnam Chae
  • Patent number: 11899132
    Abstract: Embodiments provided herein allow for identification of one or more regions of interest in a radar return signal that would be suitable for selected application of super-resolution processing. One or more super-resolution processing techniques can be applied to the identified regions of interest. The selective application of super-resolution processing techniques can reduce processing requirements and overall system delay. The output data of the super-resolution processing can be provided to a mobile computer system. The output data of the super-resolution processing can also be used to reconfigure the radar radio frequency front end to beam form the radar signal in region of the detected objects. The mobile computer system can use the output data for implementation of deep learning techniques. The deep learning techniques enable the vehicle to identify and classify detected objects for use in automated driving processes.
    Type: Grant
    Filed: September 23, 2020
    Date of Patent: February 13, 2024
    Assignee: QUALCOMM Incorporated
    Inventors: Makesh Pravin John Wilson, Volodimir Slobodyanyuk, Sundar Subramanian, Radhika Dilip Gowaikar, Michael John Hamilton, Amin Ansari
  • Patent number: 11897702
    Abstract: A parcel transfer system transfers a parcel or a similar article directly between a conveyor and a self-driving vehicle (“SDV”) while the conveyor and SDV are moving. The conveyor, SDV, or both may be configured to initially transport a parcel in a first direction of travel and then subsequently offload the parcel in a second direction of travel. The SDV is configured to travel alongside of the conveyor in the first direction of travel and either receive the parcel as it is offloaded from the conveyor or offload the parcel onto the conveyor in the second direction of travel. The parcel transfer system further includes a vision and control subsystem, which regulates movement of the SDV and offloading of the parcel from the conveyor or SDV.
    Type: Grant
    Filed: January 6, 2021
    Date of Patent: February 13, 2024
    Assignee: Material Handling Systems, Inc.
    Inventors: Michael Thomas Fleming, Robertus Arnoldus Adrianus Schmit
  • Patent number: 11900795
    Abstract: The present invention relates to a pedestrian device and a traffic safety assistance method which can effectively and properly support a pedestrian's safety confirmation by utilizing vehicle-to-pedestrian communications and an AR device. A pedestrian device of the present invention includes: an ITS communication device 21 (pedestrian-vehicle communication device) configured to perform vehicle-to-pedestrian communications with an in-vehicle terminal 2; a processor 32 configured to determine if there is a risk of collision based on information transmitted to and received from the in-vehicle terminal 2, and control provision of an alert to a user of the pedestrian device; and an AR display 26 for displaying a virtual object overlaid on a real space which can be seen by the user, wherein the processor controls display of the virtual object (virtual terminal) on the AR display as an alert operation to provide an alert to the user.
    Type: Grant
    Filed: May 18, 2020
    Date of Patent: February 13, 2024
    Assignee: Panasonic Intellectual Property Management Co., Ltd.
    Inventors: Tsuyoshi Ueno, Masahito Sugahara, Shintaro Muramatsu, Yoshiyuki Okubo
  • Patent number: 11900701
    Abstract: A left object detection device for detecting an object left behind in a monitoring target area includes an acquisition section which acquires an image of the area from an image pickup device for picking up an image of the monitoring target area, a processing section, and an output section which outputs in accordance with a result of the processing section. Upon detection of the object which has been kept static for a given time period as an article from the image acquired by the acquisition section, the processing section starts a timer for counting time from detection of the article, determines whether or not a person who owned the article matches a person in the image acquired by the acquisition section, stops the timer upon determination that they match, and does not stop the timer upon determination that they do not match. The processing section determines whether or not the timer counts in excess of a predetermined time period.
    Type: Grant
    Filed: March 1, 2019
    Date of Patent: February 13, 2024
    Assignee: Hitachi, Ltd.
    Inventors: Hiroshi Okatani, Hiroaki Koiwa, Noriharu Amiya
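    The timer logic in the abstract maps naturally onto a small state machine; the sketch below assumes a fixed allowed period and leaves detection and owner matching to upstream components.

```python
import time

class LeftObjectMonitor:
    """Timer logic from the abstract: start counting when a static object is
    detected as an article, stop if its owner reappears, raise an alert when
    the count exceeds the predetermined time period."""

    def __init__(self, allowed_seconds=30.0):
        self.allowed = allowed_seconds
        self.started_at = None

    def on_article_detected(self, now=None):
        if self.started_at is None:
            self.started_at = now if now is not None else time.time()

    def on_person_seen(self, matches_owner):
        # Stop the timer only when the person in frame matches the owner.
        if matches_owner:
            self.started_at = None

    def is_left_behind(self, now=None):
        if self.started_at is None:
            return False
        now = now if now is not None else time.time()
        return (now - self.started_at) > self.allowed

monitor = LeftObjectMonitor(allowed_seconds=30.0)
monitor.on_article_detected(now=0.0)
monitor.on_person_seen(matches_owner=False)   # a stranger: keep counting
print(monitor.is_left_behind(now=45.0))       # True: 45 s > 30 s
```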
  • Patent number: 11898869
    Abstract: A system for generating a map of an environment, the system including a plurality of agents that acquire mapping data captured by a mapping system including a range sensor. The mapping data is indicative of a three dimensional representation of the environment and is used to generate frames representing parts of the environment. The agents receive other frame data from other agents, which is indicative of other frames representing parts of the environment generated using other mapping data captured by a mapping system of the other agents. Each agent then generates a graph representing a map of the environment by generating nodes using the frames and other frames, each node being indicative of a respective part of the environment, and calculating edges interconnecting the nodes, the edges being indicative of spatial offsets between the nodes.
    Type: Grant
    Filed: August 9, 2021
    Date of Patent: February 13, 2024
    Assignee: Commonwealth Scientific and Industrial Research Organisation
    Inventor: Gavin Catt
  • Patent number: 11897734
    Abstract: The present disclosure presents systems and methods for assisting a crane operator. An imaging system automatically images each building material object contained within a shakeout field on a construction site and processes the images to determine each object's identifying indicium and its geometric properties. These values are compared with a construction site database to ensure that all necessary building materials are present. A positioning device tracks, in real time, the location of the imaging system, the location of structural members within the shakeout field, and/or other important features of the job site, such as obstacles that the crane operator will need to avoid while lifting a structural member from its initial location to its destination location.
    Type: Grant
    Filed: February 15, 2023
    Date of Patent: February 13, 2024
    Assignee: Structural Services, Inc.
    Inventors: James T. Benzing, William R. Haller, Seth T. Slavin, Marta Kasica-Soltan
  • Patent number: 11893750
    Abstract: A machine-learning (ML) architecture for determining three or more outputs, such as a two and/or three-dimensional region of interest, semantic segmentation, direction logits, depth data, and/or instance segmentation associated with an object in an image. The ML architecture may output these outputs at a rate of 30 or more frames per second on consumer grade hardware.
    Type: Grant
    Filed: December 31, 2019
    Date of Patent: February 6, 2024
    Assignee: ZOOX, INC.
    Inventors: Kratarth Goel, James William Vaisey Philbin, Praveen Srinivasan, Sarah Tariq
  • Patent number: 11893704
    Abstract: An image processing device according to one embodiment estimates optical flow information, pixel by pixel, on the basis of a reference image and input images of consecutive frames, and estimates a term corresponding to temporal consistency between the frames of the input images. The image processing device determines a mesh on the basis of the term corresponding to temporal consistency and the optical flow information, and transforms the reference image on the basis of the mesh. The image processing device performs image blending on the basis of the input image, the transformed reference image, and mask data.
    Type: Grant
    Filed: February 21, 2019
    Date of Patent: February 6, 2024
    Assignee: KOREA ADVANCED INSTITUTE OF SCIENCE AND TECHNOLOGY
    Inventors: Junyong Noh, Hanui Lee, Bumki Kim, Gukho Kim, Julie Alfonsine E. Lelong, Mohammad Reza Karimi Dastjerdi, Allen Kim, Jiwon Lee
  • Patent number: 11893748
    Abstract: An image processing apparatus includes a memory configured to store first region detection information of a first frame; and a processor. The processor is configured to: identify a first pixel region corresponding to the first region detection information from a second frame, perform region growing processing on the first pixel region based on an adjacent pixel region that is adjacent to the first pixel region, obtain second region detection information of the second frame, based on the region growing processing, and perform image processing on the second frame based on the second region detection information.
    Type: Grant
    Filed: September 16, 2022
    Date of Patent: February 6, 2024
    Assignee: SAMSUNG ELECTRONICS CO., LTD.
    Inventor: Hyungjun Lim
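    A sketch of growing the pixel region of the second frame from the region detection information carried over from the first frame, assuming 4-connected breadth-first growth with an intensity tolerance (the growth criterion is an assumption).

```python
from collections import deque
import numpy as np

def grow_region(frame, seed_mask, tolerance=12):
    """Breadth-first region growing: starting from the pixel region carried
    over from the previous frame (seed_mask), absorb 4-connected neighbours
    whose intensity is within `tolerance` of the seed region's mean."""
    h, w = frame.shape
    region = seed_mask.astype(bool).copy()
    mean_val = float(frame[region].mean())
    queue = deque(zip(*np.nonzero(region)))
    while queue:
        y, x = queue.popleft()
        for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w and not region[ny, nx]:
                if abs(float(frame[ny, nx]) - mean_val) <= tolerance:
                    region[ny, nx] = True
                    queue.append((ny, nx))
    return region

frame = np.full((20, 20), 200, dtype=np.uint8)
frame[5:15, 5:15] = 60                      # the object in the second frame
seed = np.zeros((20, 20), dtype=bool)
seed[8:11, 8:11] = True                     # detection carried over from frame 1
print(grow_region(frame, seed).sum())       # 100: the full 10x10 object
```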
  • Patent number: 11893783
    Abstract: A neural processing unit (NPU) for decoding video or feature map is provided. The NPU may comprise at least one processing element (PE) to perform an inference using an artificial neural network. The at least one PE may be configured to receive and decode data included in a bitstream. The data included in the bitstream may comprise data of a base layer. Alternatively, the data included in the bitstream may comprise data of the base layer and data of at least one enhancement layer. The data of the base layer included in the bitstream may include a first feature map. The data of the at least one enhancement layer included in the bitstream may include a second feature map.
    Type: Grant
    Filed: May 15, 2023
    Date of Patent: February 6, 2024
    Assignee: DEEPX CO., LTD.
    Inventors: Lok Won Kim, Ha Joon Yu
  • Patent number: 11893152
    Abstract: Techniques are provided for sentiment-based adaptations of user representations in virtual environments. One method comprises obtaining information characterizing a first user and a second user interacting with a session of a virtual environment; applying the information to an analytics engine to obtain a sentiment status indicating a sentiment of the second user in the virtual environment, wherein the analytics engine processes the obtained sentiment status of the second user to select an adaptation of a facial expression and/or a body language of a representation of the first user from learned sentiment-based facial expression examples and/or learned sentiment-based body language examples, using the obtained sentiment status of the second user; and automatically initiating a rendering of the virtual environment using the selected adaptation of the facial expression and/or the body language of the representation of the first user in the virtual environment.
    Type: Grant
    Filed: February 15, 2023
    Date of Patent: February 6, 2024
    Assignee: Dell Products L.P.
    Inventors: Ofir Ezrielev, Or Herman Saffar, John Lawrence Dalton, Noga Gershon
  • Patent number: 11893084
    Abstract: Disclosed herein is an object detection system, including apparatuses and methods for object detection. An implementation may include receiving a first image frame from an ROI detection model that generated a first ROI boundary around a first object detected in the first image frame and subsequently receiving a second image frame. The implementation further includes predicting, using an ROI tracking model, that the first ROI boundary will be present in the second image frame and then detecting whether the first ROI boundary is in fact present in the second image frame. The implementation includes determining that the second image frame should be added to a training dataset for the ROI detection model when detecting that the ROI detection model did not generate the first ROI boundary in the second image frame as predicted and re-training the ROI detection model using the training dataset.
    Type: Grant
    Filed: September 7, 2021
    Date of Patent: February 6, 2024
    Assignee: JOHNSON CONTROLS TYCO IP HOLDINGS LLP
    Inventors: Santle Camilus Kulandai Samy, Rajkiran Kumar Gottumukkal, Yohai Falik, Rajiv Ramanasankaran, Prantik Sen, Deepak Chembakassery Rajendran
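    A sketch of the "predicted but not detected" check, assuming the comparison between the tracker's predicted ROI and the detector's output is done with an IoU threshold; the threshold and matching rule are assumptions.

```python
def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / float(area_a + area_b - inter) if inter else 0.0

def frames_for_retraining(predicted_rois, detected_rois_per_frame, iou_thr=0.5):
    """Flag frames where the tracking model predicted an ROI but the detection
    model produced no sufficiently overlapping ROI; those frames become
    candidates for the detection model's training dataset."""
    flagged = []
    for frame_idx, predicted in predicted_rois.items():
        detections = detected_rois_per_frame.get(frame_idx, [])
        if not any(iou(predicted, d) >= iou_thr for d in detections):
            flagged.append(frame_idx)
    return flagged

predicted = {1: (100, 100, 200, 200), 2: (110, 105, 210, 205)}
detected = {1: [(98, 102, 205, 198)], 2: []}       # detector missed the object in frame 2
print(frames_for_retraining(predicted, detected))  # [2]
```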
  • Patent number: 11893805
    Abstract: The present disclosure relates to a service system and method for detecting the number of in-vehicle persons of a vehicle in a high occupancy vehicle lane. In more detail, the service system and method detect a vehicle violating the law on use of a high occupancy vehicle lane by calculating the number of in-vehicle persons of a vehicle using the lane, tracking that number on the basis of images of the vehicle taken by a camera disposed diagonally to the traveling direction of the vehicle and a camera disposed orthogonally to the traveling direction of the vehicle.
    Type: Grant
    Filed: October 24, 2023
    Date of Patent: February 6, 2024
    Assignee: G&T Solutions, Inc.
    Inventors: Heedon Yoon, Yujeong Choe, Sangcheol Kang, Inwon Yeon
  • Patent number: 11887292
    Abstract: The present invention discloses a two-step anti-fraud vehicle insurance image collecting and quality testing method, system and device, the method comprises: step 1, collecting vehicle insurance scene images and marking vehicle orientation; step 2, performing object detection on the collected vehicle insurance scene images and screening to obtain object coordinates; step 3, according to the vehicle orientation and the object coordinates, obtaining the specific position of the object coordinates located in the whole vehicle; step 4, according to the object coordinates screened in step 2, performing vehicle component detection on the vehicle insurance scene images, obtaining the component coordinates of the vehicle components, and screening to obtain the vehicle component closest to the object coordinates; step 5, according to the specific position of the object coordinates located in the whole vehicle and the vehicle components closest to the object coordinates, obtaining the position of the vehicle components
    Type: Grant
    Filed: May 13, 2023
    Date of Patent: January 30, 2024
    Assignee: ZHEJIANG LAB
    Inventors: Jinni Dong, Jiaxi Yang, Kai Ding, Chongning Na
  • Patent number: 11887381
    Abstract: A method of predicting lane line types utilizing a heterogeneous convolutional neural network (HCNN) includes capturing an input image with one or more optical sensors disposed on a host member, passing the input image through the HCNN, the HCNN having at least three distinct sub-networks, the three distinct sub-networks: predicting object locations in the input image with a first sub-network; predicting lane line locations in the input image with a second sub-network; and predicting lane line types for each predicted lane line in the input image with a third sub-network.
    Type: Grant
    Filed: April 30, 2021
    Date of Patent: January 30, 2024
    Assignee: New Eagle, LLC
    Inventors: Iyad Faisal Ghazi Mansour, Akhil Umat
  • Patent number: 11887257
    Abstract: A method and an apparatus for virtual training based on tangible interaction are provided. The apparatus acquires data for virtual training, and acquires a three-dimensional position of a real object based on a depth image and color image of the real object and infrared (IR) data included in the obtained data. Then, virtualization of an overall appearance of a user is performed by extracting a depth from depth information on a user image included in the obtained data and matching the extracted depth with the color information, and depth data and color data for the user obtained according to virtualization of the user is visualized in virtual training content. In addition, the apparatus performs correction on joint information using the joint information and the depth information included in the obtained data, estimates a posture of the user using the corrected joint information, and estimates a posture of a training tool using the depth information and IR data included in the obtained data.
    Type: Grant
    Filed: November 17, 2021
    Date of Patent: January 30, 2024
    Assignee: Electronics and Telecommunications Research Institute
    Inventors: Seong Min Baek, Youn-Hee Gil, Cho-Rong Yu, Hee Sook Shin, Sungjin Hong
  • Patent number: 11889180
    Abstract: A photographing method and an electronic device are provided, so that a to-be-photographed target can continue to be tracked after the to-be-photographed target returns to a shooting image, to improve accuracy of focusing performed during photographing of a moving object. The method includes: displaying a first image including a first object and a tracking indicator that is associated with the first object, the tracking indicator indicating that the first object is a tracked target; displaying a second image that does not include the first object or the tracking indicator; displaying a third image including the first object; automatically setting the first object as the tracked target and displaying the tracking indicator associated with the first object; and automatically focusing on the first object when displaying the third image.
    Type: Grant
    Filed: December 13, 2019
    Date of Patent: January 30, 2024
    Assignee: Huawei Technologies Co., Ltd.
    Inventors: Tao Shen, Jun Wang, Yang Li, Yanpeng Ma
  • Patent number: 11887379
    Abstract: Systems and methods for machine-learning assisted road sign content prediction and machine learning training are disclosed. A sign detector model processes images or video with road signs. A visual attribute prediction model extracts visual attributes of the sign in the image. The visual attribute prediction model can communicate with a knowledge graph reasoner to validate the visual attribute prediction model by applying various rules to the output of the visual attribute prediction model. A plurality of potential sign candidates are retrieved that match the visual attributes of the image subject to the visual attribute prediction model, and the rules help to reduce the list of potential sign candidates and improve accuracy of the model.
    Type: Grant
    Filed: August 17, 2021
    Date of Patent: January 30, 2024
    Inventors: Ji Eun Kim, Mohammad Sadegh Norouzzadeh, Kevin H. Huang, Shashank Shekhar
  • Patent number: 11887385
    Abstract: Performing object localization inside a cabin of a vehicle is provided. A camera image is received of a cabin of a vehicle by an image sensor at a first location with respect to a plurality of seating zones of a vehicle. Object detection on the camera image is performed to identify one or more objects in the camera image. A machine-learning model trained on images taken at the first location is utilized to place the one or more objects into the seating zones of the vehicle according to a plurality of bounding boxes corresponding to the plurality of seating zones for the first location.
    Type: Grant
    Filed: July 19, 2021
    Date of Patent: January 30, 2024
    Assignee: FORD GLOBAL TECHNOLOGIES, LLC
    Inventors: Faizan Shaik, Robert Parenti, Medha Karkare
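    A minimal sketch of placing detected objects into seating zones via per-zone bounding boxes calibrated for one camera location; the zone boxes and the centre-in-box rule are illustrative assumptions.

```python
def assign_to_seating_zones(detections, zone_boxes):
    """Place each detected object into the seating zone whose bounding box
    (calibrated for the camera's fixed mounting location) contains the centre
    of the object's detection box."""
    placements = {}
    for obj_id, (x1, y1, x2, y2) in detections.items():
        cx, cy = (x1 + x2) / 2.0, (y1 + y2) / 2.0
        placements[obj_id] = next(
            (zone for zone, (zx1, zy1, zx2, zy2) in zone_boxes.items()
             if zx1 <= cx <= zx2 and zy1 <= cy <= zy2),
            None)
    return placements

# Hypothetical zone boxes for a camera at one fixed in-cabin location (pixels).
zones = {"driver": (0, 0, 320, 360), "front_passenger": (320, 0, 640, 360),
         "rear_left": (0, 360, 320, 720), "rear_right": (320, 360, 640, 720)}
objects = {"phone": (350, 80, 420, 160), "backpack": (60, 500, 200, 650)}
print(assign_to_seating_zones(objects, zones))
# {'phone': 'front_passenger', 'backpack': 'rear_left'}
```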
  • Patent number: 11887383
    Abstract: Vehicle interior object management uses analysis for detection of an object within a vehicle. The object can include a cell phone, a computing device, a briefcase, a wallet, a purse, or luggage. The object can include a child or a pet. A distance between an occupant and the object can be calculated. The object can be within a reachable distance of the occupant. Two or more images of a vehicle interior are collected using imaging devices within the vehicle. The images are analyzed to detect an object within the vehicle. The object is classified. A level of interaction is estimated between an occupant of the vehicle and the object within the vehicle. The object can be determined to have been left behind once the occupant leaves the vehicle. A control element of the vehicle is changed based on the classifying and the level of interaction.
    Type: Grant
    Filed: August 28, 2020
    Date of Patent: January 30, 2024
    Assignee: Affectiva, Inc.
    Inventors: Panu James Turcot, Rana el Kaliouby, Abdelrahman N. Mahmoud, Mohamed Ezzeldin Abdelmonem Ahmed Mohamed, Andrew Todd Zeilman, Gabriele Zijderveld
  • Patent number: 11886645
    Abstract: Disclosed herein are systems, devices, and processes for gesture detection. A method includes capturing a series of images. The method includes generating motion isolation information based on the series of images. The method includes generating a composite image based on the motion isolation information. The method includes determining a gesture based on the composite image. The processes described herein may include the use of convolutional neural networks on a series of time-related images to perform gesture detection on embedded systems or devices.
    Type: Grant
    Filed: December 20, 2021
    Date of Patent: January 30, 2024
    Assignee: ALPINE ELECTRONICS OF SILICON VALLEY, INC.
    Inventors: Diego Rodriguez Risco, Samir El Aouar, Alexander Joseph Ryan
  • Patent number: 11880212
    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for determining that current data captured at a current location of a drone satisfies localization adjustment criteria; in response to determining that the current data captured at the current location of the drone satisfies the localization adjustment criteria, identifying previously captured image data; determining a previous expected location of the drone based on both an expected change in location of the drone and a first previous location determined from other image data captured before the previously captured image data; determining a location difference between the previous expected location of the drone and a second previous location determined from the previously captured image data; and determining the current location of the drone based on the location difference.
    Type: Grant
    Filed: July 9, 2021
    Date of Patent: January 23, 2024
    Assignee: Alarm.com Incorporated
    Inventors: Donald Gerard Madden, Babak Rezvani, Ahmad Seyfi, Timon Meyer, Glenn Tournier
  • Patent number: 11880509
    Abstract: Systems and methods herein describe using a neural network to identify a first set of joint location coordinates and a second set of joint location coordinates and identifying a three-dimensional hand pose based on both the first and second sets of joint location coordinates.
    Type: Grant
    Filed: January 9, 2023
    Date of Patent: January 23, 2024
    Assignee: SNAP INC.
    Inventors: Yuncheng Li, Jonathan M. Rodriguez, II, Zehao Xue, Yingying Wang
  • Patent number: 11879768
    Abstract: Methods and apparatus to generate an augmented environment including a weight indicator for a vehicle are disclosed herein. An example apparatus disclosed herein includes memory including stored instructions, a processor to execute the instructions to generate a map of loads on a vehicle based on load data associated with a sensor of the vehicle, determine a load condition of the vehicle based on the map of loads, correlate a first load of the map of loads with an object identified using live video data received from a camera, and generate an augmented environment identifying at least one of a location of the object, the first load correlated with the object, or the load condition.
    Type: Grant
    Filed: July 29, 2021
    Date of Patent: January 23, 2024
    Assignee: Ford Global Technologies, LLC
    Inventors: Anton Rogness, Peter Simeon Lazarevski, Joshua Rajasingh, Andrew Niedert, Elliott Pearson
  • Patent number: 11880504
    Abstract: A system including a display, an eye tracker, and a processor that executes the following: operate the display to present multiple objects to a user; operate the eye tracker to track an eye gaze of the user during the presentation of the multiple objects; calculate an estimated user preference of each of the multiple objects, by performing at least one of: (a) calculating a measure of central tendency of distances from a center of the respective object to points of the eye gaze on the respective object, wherein a higher measure of central tendency of the distances indicates a stronger user preference and vice versa, and (b) calculating a measure of central tendency of speeds of transition between temporally-consecutive ones of the points of the eye gaze, wherein a higher measure of central tendency of the speeds indicates a stronger user preference and vice versa.
    Type: Grant
    Filed: September 28, 2022
    Date of Patent: January 23, 2024
    Assignee: Ramot at Tel-Aviv University Ltd.
    Inventors: Tom Schonberg, Michal Gabay
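    A small sketch of the two measures of central tendency named in the abstract, using the median as the central-tendency statistic (the patent allows other choices) over timestamped gaze points.

```python
import statistics

def preference_scores(gaze_points_per_object, object_centers):
    """For each displayed object, compute the two measures from the abstract:
    the central tendency (median here) of gaze-point distances from the
    object's centre, and of the speeds between temporally consecutive points.
    Higher values indicate stronger estimated user preference."""
    scores = {}
    for obj, points in gaze_points_per_object.items():
        cx, cy = object_centers[obj]
        dists = [((x - cx) ** 2 + (y - cy) ** 2) ** 0.5 for t, x, y in points]
        speeds = []
        for (t0, x0, y0), (t1, x1, y1) in zip(points, points[1:]):
            dt = t1 - t0
            speeds.append((((x1 - x0) ** 2 + (y1 - y0) ** 2) ** 0.5) / dt)
        scores[obj] = {"median_distance": statistics.median(dists),
                       "median_speed": statistics.median(speeds) if speeds else 0.0}
    return scores

# Each gaze point is (timestamp_seconds, x, y) in screen pixels.
gaze = {"A": [(0.0, 10, 10), (0.1, 12, 11), (0.2, 11, 9)],     # tight, slow
        "B": [(0.0, 80, 40), (0.1, 120, 90), (0.2, 60, 20)]}   # scattered, fast
centers = {"A": (10, 10), "B": (90, 50)}
print(preference_scores(gaze, centers))
```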
  • Patent number: 11880541
    Abstract: Methods, systems, computer-readable media, and apparatuses for generating an Augmented Reality (AR) object are presented. The apparatus can include memory and one or more processors coupled to the memory. The one or more processors can be configured to receive an image of at least a portion of a real-world scene including a target object. The one or more processors can also be configured to generate an AR object corresponding to the target object and including a plurality of parts. The one or more processors can further be configured to receive a user input associated with a designated part of the plurality of parts and manipulate the designated part based on the received user input.
    Type: Grant
    Filed: August 13, 2021
    Date of Patent: January 23, 2024
    Assignee: QUALCOMM Incorporated
    Inventors: Raphael Grasset, Hartmut Seichter
  • Patent number: 11882387
    Abstract: The method comprises determining a set of coordinates each for two or more appearances of a target subject within a sequence of images, the set of coordinates of the two or more appearances of the target subject defining a first path; determining a set of coordinates each for two or more appearances of a related subject within a sequence of images, the related subject relating to the target subject, the set of coordinates of the two or more appearances of the related subject defining a second path; determining one or more minimum distances between the first path and the second path so as to determine at least a region of interest; determining a timestamp of a first appearance and a timestamp of a last appearance of the target subject; and determining a timestamp of a first appearance and a timestamp of a last appearance of the related subject.
    Type: Grant
    Filed: August 20, 2019
    Date of Patent: January 23, 2024
    Assignee: NEC CORPORATION
    Inventors: Hui Lam Ong, Satoshi Yamazaki, Hong Yen Ong, Wei Jian Peh
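    A sketch of the path and timestamp computations, simplifying the path-to-path distance to the minimum vertex-to-vertex distance between the two sets of appearance coordinates.

```python
import math

def min_distance_between_paths(path_a, path_b):
    """Minimum pairwise distance between the appearance coordinates that define
    the two paths (a simplification; vertex-to-vertex rather than true
    segment-to-segment distance)."""
    return min(math.dist(p, q) for p in path_a for q in path_b)

def appearance_window(appearances):
    """Timestamps of the first and last appearance of a subject."""
    times = [t for t, _, _ in appearances]
    return min(times), max(times)

# Each appearance is (timestamp, x, y); the coordinates define the path.
target = [(10.0, 0, 0), (12.0, 5, 1), (15.0, 9, 3)]
related = [(11.0, 20, 8), (14.0, 10, 4), (18.0, 2, 2)]

path_t = [(x, y) for _, x, y in target]
path_r = [(x, y) for _, x, y in related]
print(min_distance_between_paths(path_t, path_r))   # closest approach of the two paths
print(appearance_window(target), appearance_window(related))
```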
  • Patent number: 11880965
    Abstract: A method for obtaining a Fourier ptychography image using a smartphone comprises the steps of: (a) sequentially providing illumination of different angles to the sample by sequentially displaying, according to a first pattern composed of point light sources at different positions, the point light sources of the first pattern on a display of the smartphone; (b) obtaining an image for each illumination angle of the sample using a camera of the smartphone whenever illumination of different angles is provided by the point light sources of the first pattern; and (c) restoring a first Fourier ptychography image using a plurality of images for each illumination angle obtained using the camera of the smartphone.
    Type: Grant
    Filed: February 14, 2022
    Date of Patent: January 23, 2024
    Assignee: INDUSTRY-ACADEMIC COOPERATION FOUNDATION, YONSEI UNIVERSITY
    Inventors: Seung Ah Lee, Kyung Chul Lee, Kyungwon Lee, Jaewoo Jung, Se Hee Lee
  • Patent number: 11880995
    Abstract: Techniques for auto-locating and positioning relative to an aircraft are disclosed. An example method can include a robot receiving a multi-dimensional representation of an enclosure that includes a candidate target aircraft. The robot can extract a geometric feature from the multi-dimensional representation associated with the candidate target aircraft. The robot can compare the geometric feature of the candidate target aircraft with a second geometric feature from a reference model of a target aircraft. The robot can determine whether the candidate target aircraft is the target aircraft based on the comparison. The robot can calculate a path from a location of the robot to the target aircraft based on the determination. The robot can traverse the path from the location to the target aircraft based on the calculation.
    Type: Grant
    Filed: June 21, 2023
    Date of Patent: January 23, 2024
    Assignee: Wilder Systems Inc.
    Inventors: Spencer Voiss, William Wilder
  • Patent number: 11881034
    Abstract: Erroneous detection due to erroneous parallax measurement is suppressed to accurately detect a step present on a road. An in-vehicle environment recognition device 1 includes a processing device that processes a pair of images acquired by a stereo camera unit 100 mounted on a vehicle.
    Type: Grant
    Filed: December 25, 2020
    Date of Patent: January 23, 2024
    Assignee: HITACHI ASTEMO, LTD.
    Inventors: Masayuki Takemura, Takeshi Shima, Haruki Matono
  • Patent number: 11881143
    Abstract: In particular embodiments, a computing system of a device may determine a display peak power budget allocated for a display component of the device. The system may determine display information including display workload and display telemetry associated with the display component. The system may determine, in accordance with a display peak power management policy applied to the display peak power budget and the display information, one or more display-controlling parameters for maintaining the display component to operate within the display peak power budget. The system may determine, based on the one or more display-controlling parameters, a plurality of grayscales for a plurality of regions on a display screen of the device. The system may adjust a rendered frame based on the plurality of grayscales and output the adjusted rendered frame on the display screen of the device.
    Type: Grant
    Filed: October 12, 2021
    Date of Patent: January 23, 2024
    Assignee: Meta Platforms Technologies, LLC
    Inventors: Nilanjan Goswami, Eugene Gorbatov, Steve John Clohset, Michael Yee