Patents Examined by Michelle M Entezari Hausmann
  • Patent number: 12243286
    Abstract: Provided are a method and apparatus for detecting a pattern image, and more particularly, a pattern image detection method and a pattern image detection apparatus for effectively detecting a pattern of a template image from a target image. The pattern image detection method and the pattern image detection apparatus can provide an effect of quickly and accurately detecting the pattern of the template image from the target image while reducing the amount of computation for detecting the pattern of the template image.
    Type: Grant
    Filed: April 21, 2022
    Date of Patent: March 4, 2025
    Assignee: Fourth Logic Incorporated
    Inventors: Jong Hyun Song, Kang Cho
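The kind of template-pattern search this abstract describes can be illustrated with a minimal normalized cross-correlation scan. This is a generic sketch of template matching, not the patented method; all function names are hypothetical.

```python
def ncc(patch, template):
    """Normalized cross-correlation between two equal-size patches
    (lists of rows of numbers)."""
    n = len(template) * len(template[0])
    mp = sum(map(sum, patch)) / n
    mt = sum(map(sum, template)) / n
    num = den_p = den_t = 0.0
    for pr, tr in zip(patch, template):
        for p, t in zip(pr, tr):
            num += (p - mp) * (t - mt)
            den_p += (p - mp) ** 2
            den_t += (t - mt) ** 2
    den = (den_p * den_t) ** 0.5
    return num / den if den else 0.0

def find_pattern(target, template):
    """Slide the template over the target image; return the (row, col)
    of the best-scoring position."""
    th, tw = len(template), len(template[0])
    best, best_pos = -2.0, None
    for r in range(len(target) - th + 1):
        for c in range(len(target[0]) - tw + 1):
            patch = [row[c:c + tw] for row in target[r:r + th]]
            score = ncc(patch, template)
            if score > best:
                best, best_pos = score, (r, c)
    return best_pos
```

The brute-force scan here is exactly the computation whose cost such patents aim to reduce (e.g. by pruning candidate positions), which is why the abstract emphasizes "reducing the amount of computation".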
  • Patent number: 12238445
    Abstract: The present disclosure is an apparatus and a method for providing a precise motion estimation learning model including a database unit which stores a standard dataset labeled according to a first number of key points, an animation dataset labeled according to a second number of key points which is larger than the first number, and a photorealistic dataset having the second number of key points, a standard learning unit which learns the standard dataset for motion estimation to generate a standard learning model, an animation learning unit which retrains the animation dataset based on a weight of the standard learning model to generate an animation learning model, and a motion estimation learning unit which trains the photorealistic dataset based on the weight of the animation learning model to finely tune to generate a precise motion estimation learning model.
    Type: Grant
    Filed: August 30, 2022
    Date of Patent: February 25, 2025
    Assignee: Research & Business Foundation Sungkyunkwan University
    Inventors: Won Je Choi, Sung Hyun Choi, Hong Uk Woo
  • Patent number: 12230019
    Abstract: The present invention discloses a decoupling divide-and-conquer facial nerve segmentation method and device. As for the characteristics of a small facial nerve structure and a low contrast, a facial nerve segmentation model including a feature extraction module, a rough segmentation module, and a fine segmentation module is constructed. The feature extraction module is configured to extract a low-level feature and a plurality of different high-level features. The rough segmentation module is configured to globally search the high-level features for facial-nerve features and fuse them. The fine segmentation module is configured to decouple a fused feature to obtain a central body feature. After the central body feature is combined with the low-level feature to obtain an edge-detail feature, a space attention mechanism is used to extract attention features from the central body feature and the edge-detail feature, to obtain a facial nerve segmentation image.
    Type: Grant
    Filed: February 28, 2022
    Date of Patent: February 18, 2025
    Assignee: ZHEJIANG UNIVERSITY
    Inventors: Jing Wang, Bo Dong, Hongjian He, Xiujun Cai
  • Patent number: 12217457
    Abstract: This disclosure is directed to, in part, mobile carts that are configured to determine their respective locations based on analysis of image data generated by cameras mounted to the respective carts. For instance, an example mobile cart may include at least one camera with a field-of-view directed substantially away from the cart and substantially towards its outward environment, such as toward an inventory location that houses one or more items. The mobile cart may generate image data representative of items housed at an inventory location adjacent to the cart and may use computer-vision techniques to analyze the image data and determine characteristics of these items. The mobile cart may then use this information to determine in which of multiple sections of a store the cart is currently located.
    Type: Grant
    Filed: September 30, 2021
    Date of Patent: February 4, 2025
    Assignee: Amazon Technologies, Inc.
    Inventors: Bruno Miranda Artacho, Vinod Krishnan Kulathumani, Sreemanananth Sadanand
  • Patent number: 12211293
    Abstract: A system receives identification of an item to be loaded into a vehicle and determines dimensions of the item. The system adds the item to a total load for an identified vehicle and, based on known dimensions of available space in the identified vehicle, determines one or more candidate placement options for each of one or more items comprising the total load, within available space within the vehicle, wherein candidate placement options for each item are determined so as not to overlap with placement options for another of the one or more items within a given configuration representing placement options for all of the one or more items comprising the total load. Additionally, the system presents a visual rendering of at least one given configuration that successfully allows the total load to fit within the available space.
    Type: Grant
    Filed: February 15, 2022
    Date of Patent: January 28, 2025
    Assignee: Ford Global Technologies, LLC
    Inventors: Keith Weston, Brendan F. Diamond, Michael Alan McNees, Jordan Barrett, Andrew Denis Lewandowski
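The non-overlapping candidate-placement search the abstract describes can be sketched as a greedy first-fit packer over axis-aligned boxes. This is a generic illustration under assumed axis-aligned, integer-grid placements, not Ford's actual algorithm; all names are hypothetical.

```python
def overlaps(a, b):
    """True if two axis-aligned 3-D boxes (x, y, z, w, d, h) intersect."""
    for i in range(3):
        if a[i] + a[i + 3] <= b[i] or b[i] + b[i + 3] <= a[i]:
            return False
    return True

def first_fit(space, items, step=1):
    """Greedily place each item (w, d, h) inside space (W, D, H) so that
    no two placements overlap. Returns the placements, or None if some
    item cannot be placed."""
    W, D, H = space
    placed = []
    for w, d, h in items:
        spot = None
        for z in range(0, H - h + 1, step):
            for y in range(0, D - d + 1, step):
                for x in range(0, W - w + 1, step):
                    cand = (x, y, z, w, d, h)
                    if not any(overlaps(cand, p) for p in placed):
                        spot = cand
                        break
                if spot is not None:
                    break
            if spot is not None:
                break
        if spot is None:
            return None   # this configuration does not fit the total load
        placed.append(spot)
    return placed
```

A successful return corresponds to one "given configuration" in the claim language: a set of placements, one per item, with pairwise non-overlap, which could then be rendered visually.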
  • Patent number: 12211242
    Abstract: A system and method are disclosed for using a local neighborhood for determining similar targets in different documents or using implicit coordinates for obtaining a coordinate location of a target. The local neighborhood method may include identifying a first target in a first document; identifying one or more first elements within a first distance range from the first target; creating a first local neighborhood based on the identifying; determining that the first local neighborhood is similar to a third local neighborhood in a second document; and determining a second target in the second document that corresponds to the first target in the first document, based on the determined similarity. The implicit coordinates method may include performing OCR on the first document to find the first target; and obtaining a first location of the first target by using at least one of OCR or element recognition.
    Type: Grant
    Filed: April 21, 2022
    Date of Patent: January 28, 2025
    Assignee: Sureprep, LLC
    Inventors: David Wyle, Alex Sadovsky, Will Hosek, Andrew Bock
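The local-neighborhood matching idea can be sketched with set-overlap similarity: collect the labels of elements within a distance range of a target, then find the element in the second document whose neighborhood overlaps most. A generic illustration assuming 2-D element coordinates and Jaccard similarity, not Sureprep's actual method; all names are hypothetical.

```python
from math import hypot

def neighborhood(target, elements, r_min, r_max):
    """Labels of elements whose distance from the target lies in
    [r_min, r_max]; elements are (label, (x, y)) pairs."""
    tx, ty = target[1]
    return {label for label, (x, y) in elements
            if r_min <= hypot(x - tx, y - ty) <= r_max}

def similarity(n1, n2):
    """Jaccard similarity of two neighborhoods (label sets)."""
    return len(n1 & n2) / len(n1 | n2) if n1 | n2 else 1.0

def find_corresponding(target, doc1, doc2, r_min=0, r_max=12, thresh=0.5):
    """Return the element of doc2 whose local neighborhood is most
    similar to the target's neighborhood in doc1."""
    ref = neighborhood(target, doc1, r_min, r_max)
    best, best_sim = None, thresh
    for cand in doc2:
        sim = similarity(ref, neighborhood(cand, doc2, r_min, r_max))
        if sim > best_sim:
            best, best_sim = cand, sim
    return best
```

The appeal of the approach is that it tolerates global layout shifts between documents: only the relative arrangement of nearby elements must be preserved.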
  • Patent number: 12205327
    Abstract: A device, system, and/or method may be used to provide an adaptive alignment. A first video data may be received. The first video data may comprise a first video frame and a second video frame. Sensor pose data may be determined. The pose data may be associated with the first video frame and the second video frame. An adjusted video frame may be determined based on the first video frame and a motion indicated by the pose data. A frame adjustment value may be determined by comparing a first pixel from the adjusted video frame to a second pixel from the second video frame. The frame adjustment value may correlate the pose data to the first video data. A second video data may be determined by applying the frame adjustment value.
    Type: Grant
    Filed: December 29, 2021
    Date of Patent: January 21, 2025
    Assignee: L3Harris Technologies, Inc.
    Inventors: Paul Emerson Bussard, Scott John Taylor, Scott Brian McMillan
  • Patent number: 12200398
    Abstract: An apparatus includes at least one processing device configured to obtain input frames from a video. The at least one processing device is also configured to generate a forward flow from a first input frame to a second input frame and a backward flow from the second input frame to the first input frame. The at least one processing device is further configured to generate an occlusion map at an interpolated frame coordinate using the forward flow and the backward flow. The at least one processing device is also configured to generate a consistency map at the interpolated frame coordinate using the forward flow and the backward flow. In addition, the at least one processing device is configured to perform blending using the occlusion map and the consistency map to generate an interpolated frame at the interpolated frame coordinate.
    Type: Grant
    Filed: February 2, 2022
    Date of Patent: January 14, 2025
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Gyeongmin Choe, Yingmao Li, John Seokjun Lee, Hamid R. Sheikh, Michael O. Polley
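The occlusion and consistency maps in this abstract both rest on the standard forward-backward flow check: follow the forward flow from frame 1 into frame 2, follow the backward flow from the landing point, and flag pixels whose round trip does not return near the start. A minimal sketch of that check over sparse flow fields stored as dictionaries, not Samsung's implementation; names and representations are assumptions.

```python
def consistency_map(fwd, bwd, tol=0.5):
    """fwd/bwd map pixel coordinates (x, y) to flow vectors (dx, dy).
    A pixel is marked consistent when following the forward flow and
    then the backward flow returns (nearly) to the start; a large
    round-trip error suggests occlusion or unreliable flow."""
    result = {}
    for (x, y), (dx, dy) in fwd.items():
        tx, ty = round(x + dx), round(y + dy)      # landing point in frame 2
        bdx, bdy = bwd.get((tx, ty), (0.0, 0.0))
        err = abs(dx + bdx) + abs(dy + bdy)        # round-trip error (L1)
        result[(x, y)] = err <= tol
    return result
```

During blending, inconsistent pixels would be filled from only one of the two frames (the visible one) rather than averaged, which is what the occlusion map is for.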
  • Patent number: 12192673
    Abstract: A method includes obtaining multiple video frames. The method also includes determining whether a bi-directional optical flow between the multiple video frames satisfies an image quality criterion for bi-directional consistency. The method further includes identifying a non-linear curve based on pixel coordinate values from at least two of the video frames. The at least two video frames include first and second video frames. The method also includes generating interpolated video frames between the first and second video frames by applying non-linear interpolation based on the non-linear curve. In addition, the method includes outputting the interpolated video frames for presentation.
    Type: Grant
    Filed: February 2, 2022
    Date of Patent: January 7, 2025
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Gyeongmin Choe, Yingmao Li, John Seokjun Lee, Hamid R. Sheikh, Michael O. Polley
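The non-linear interpolation step can be illustrated by fitting a quadratic through a pixel's positions in three frames and sampling it at fractional times, instead of interpolating linearly between two frames. A toy 1-D sketch under the assumption of uniformly spaced frames at t = 0, 1, 2; the patent's actual curve model is not specified here.

```python
def fit_quadratic(p0, p1, p2):
    """Coefficients (a, b, c) of x(t) = a*t^2 + b*t + c passing through
    x(0) = p0, x(1) = p1, x(2) = p2 (uniform frame times)."""
    c = p0
    a = (p2 - 2 * p1 + p0) / 2
    b = p1 - p0 - a
    return a, b, c

def interpolate(p0, p1, p2, t):
    """Position at fractional time t in [0, 2] along the fitted curve,
    e.g. t = 0.5 for a frame halfway between the first two inputs."""
    a, b, c = fit_quadratic(p0, p1, p2)
    return a * t * t + b * t + c
```

For accelerating motion (say positions 0, 1, 4), linear interpolation between the first two frames would place the midpoint at 0.5, while the quadratic correctly places it at 0.25; that gap is the motivation for curve-based interpolation.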
  • Patent number: 12165351
    Abstract: A processor-implemented method with pose estimation includes: tracking a position of a feature point extracted from image information comprising a plurality of image frames, the image information being received from an image sensor; predicting a current state variable of an estimation model for determining a pose of an electronic device, based on motion information received from a motion sensor; determining noise due to an uncertainty of the estimation model based on a residual between a first position of the feature point extracted from the image frames and a second position of the feature point predicted based on the current state variable; updating the current state variable based on the current state variable, the tracked position of the feature point, and the noise; and determining the pose of the electronic device based on the updated current state variable.
    Type: Grant
    Filed: January 26, 2022
    Date of Patent: December 10, 2024
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Hojin Ju, Yun-Tae Kim, Donghoon Sagong, Jaehwan Pi
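The residual-driven noise term in this abstract can be sketched with a scalar filter update in which the disagreement between the predicted and measured feature position inflates the measurement variance, lowering the gain applied to inconsistent measurements. A toy 1-D sketch in the spirit of the abstract, not Samsung's estimator; the noise model is an assumption.

```python
def update(state, variance, measurement, meas_var, model_noise_scale=1.0):
    """One scalar filter step with residual-adaptive measurement noise."""
    residual = measurement - state
    # Inflate measurement variance when the model disagrees with the data,
    # standing in for the "noise due to uncertainty of the estimation model".
    adaptive_var = meas_var + model_noise_scale * residual ** 2
    gain = variance / (variance + adaptive_var)
    new_state = state + gain * residual
    new_variance = (1 - gain) * variance
    return new_state, new_variance
```

The effect is that a wildly inconsistent feature observation (large residual) barely moves the pose estimate, while a consistent one is weighted normally.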
  • Patent number: 12165432
    Abstract: This application discloses a secure face image transmission method performed by an electronic device. In this application, when a camera component of the electronic device acquires any face image, face image information can be read from a buffer. The face image information is used for indicating a quantity of all face images historically acquired by the camera component. Identification information is embedded in the face image to obtain the face image carrying the identification information. The identification information is used for indicating the face image information but not perceivable by a human being. The face image carrying the identification information is transmitted to a remote server for authenticating the face image using the identification information. Therefore, the security of the face image acquired by the camera component is improved, and the security of the face image transmission process is effectively ensured.
    Type: Grant
    Filed: November 16, 2021
    Date of Patent: December 10, 2024
    Assignee: TENCENT TECHNOLOGY (SHENZHEN) COMPANY LIMITED
    Inventors: Shaoming Wang, Zhijun Geng, Jun Zhou, Runzeng Guo
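One common way to embed identification information in an image so that it indicates a counter yet is "not perceivable by a human being" is least-significant-bit steganography. A generic LSB sketch over a flat list of 8-bit pixel values; the patent does not disclose its embedding scheme, so this is purely illustrative.

```python
def embed_counter(pixels, counter, bits=16):
    """Hide `counter` in the least-significant bits of the first `bits`
    pixel values; each pixel changes by at most 1, which is visually
    imperceptible for 8-bit channels."""
    out = list(pixels)
    for i in range(bits):
        bit = (counter >> i) & 1
        out[i] = (out[i] & ~1) | bit
    return out

def extract_counter(pixels, bits=16):
    """Recover the hidden counter from the pixel LSBs."""
    return sum((pixels[i] & 1) << i for i in range(bits))
```

On the server side, extracting the counter and checking it against the expected acquisition count would let the authenticator detect replayed or injected face images.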
  • Patent number: 12154301
    Abstract: In some examples, a computerized system for analyzing images comprises at least one programmable processor and a machine-readable medium having instructions stored thereon which, when executed by the at least one programmable processor, cause the at least one programmable processor to execute operations comprising training an autoencoder using a plurality of image model training samples, the autoencoder comprising a plurality of interconnected layers and combined instances of neural networks, passing input data into a trained autoencoder model, the input data including at least one pixel image, encoding the input data into a compressed version of the input data, and decoding the compressed version of the input data to create an output, the output including a sparse reconstruction of the input data, the output including a predicted pixel image label or score.
    Type: Grant
    Filed: October 25, 2021
    Date of Patent: November 26, 2024
    Assignee: Zeta Global Corp.
    Inventor: Danny Portman
  • Patent number: 12154340
    Abstract: A system is provided for performing a validation of an examination environment. The system acquires a video of the examination environment. The system applies one or more machine learning models to images (frames) of the video to indicate whether the image includes a prohibited item. A machine learning model may be trained using images of items labeled with an indication of whether an image includes a prohibited item. The system determines whether the validation has passed based on whether an image includes a prohibited item. The system notifies a proctor when the validation has not passed and provides to the proctor an indication of an image that contains a prohibited item. The proctor then decides whether the validation should pass or fail.
    Type: Grant
    Filed: November 30, 2021
    Date of Patent: November 26, 2024
    Assignee: Vaital
    Inventors: David Yunger, Matthew Bartels, Garegin Petrosyan, Artak Chopuryan
  • Patent number: 12148173
    Abstract: An object tracking apparatus (2000) detects an object from video data (12) and performs tracking processing of the detected object. The object tracking apparatus (2000) detects that a predetermined condition is satisfied for a first object (20) and a second object (30), using the video data (12). Thereafter, the object tracking apparatus (2000) performs tracking processing of the second object (30) in association with the first object (20).
    Type: Grant
    Filed: March 27, 2019
    Date of Patent: November 19, 2024
    Assignee: NEC CORPORATION
    Inventors: Daiki Yamazaki, Satoshi Terasawa
  • Patent number: 12148240
    Abstract: A face authentication apparatus includes a face image acquisition unit that acquires, as a first face image, a face image of a user who moves from a first area to a second area via a gate provided between the first area and the second area, a collation unit that performs face authentication, a flow rate measurement unit that measures a flow rate of users who move from the first area to the second area via the gate and a flow rate of users who move from the second area to the first area via the gate, and a security level determination unit that determines the security level of the first area and the security level of the second area on the basis of the flow rates measured.
    Type: Grant
    Filed: June 24, 2022
    Date of Patent: November 19, 2024
    Assignee: NEC CORPORATION
    Inventors: Taketo Kochi, Kenji Saito
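The flow-rate measurement and level determination can be sketched as counting gate passages per direction within a time window and applying a threshold policy. A toy illustration only; the thresholds, event format, and policy are assumptions, not NEC's claimed rule.

```python
def measure_flow(events, window):
    """Count gate passages per direction within a time window.
    events: (timestamp, direction) pairs, direction "1->2" or "2->1"."""
    t0, t1 = window
    a_to_b = sum(1 for t, d in events if t0 <= t < t1 and d == "1->2")
    b_to_a = sum(1 for t, d in events if t0 <= t < t1 and d == "2->1")
    return a_to_b, b_to_a

def security_level(inflow, outflow, high=10):
    """Toy policy: an area whose net inflow exceeds `high` per window
    is marked 'high', otherwise 'normal' (illustrative thresholds)."""
    return "high" if inflow - outflow > high else "normal"
```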
  • Patent number: 12142062
    Abstract: According to one embodiment, a reading system includes a processing device. The processing device includes an extractor, a line thinner, a setter, and an identifier. The extractor extracts a partial image from an input image. A character of a segment display is imaged in the partial image. The segment display includes a plurality of segments. The line thinner thins a cluster of pixels representing a character in the partial image. The setter sets, in the partial image, a plurality of determination regions corresponding respectively to the plurality of segments. The identifier detects a number of pixels included in the thinned cluster for each of the plurality of determination regions, and identifies the character based on a detection result.
    Type: Grant
    Filed: March 10, 2021
    Date of Patent: November 12, 2024
    Assignee: KABUSHIKI KAISHA TOSHIBA
    Inventors: Toshikazu Taki, Tsubasa Kusaka
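The final identification step, after thinning and setting per-segment determination regions, amounts to deciding which segments are lit (enough thinned-stroke pixels fall in their region) and matching that pattern against digit templates. A sketch of that lookup using the conventional a-g segment labels; the pixel counts are assumed to come from the earlier stages.

```python
SEGMENTS = "abcdefg"   # a: top, b: top-right, c: bottom-right, d: bottom,
                       # e: bottom-left, f: top-left, g: middle

DIGITS = {             # lit-segment pattern -> displayed digit
    "abcdef": "0", "bc": "1", "abdeg": "2", "abcdg": "3", "bcfg": "4",
    "acdfg": "5", "acdefg": "6", "abc": "7", "abcdefg": "8", "abcdfg": "9",
}

def classify(region_pixel_counts, min_pixels=3):
    """region_pixel_counts: segment -> number of thinned-stroke pixels
    found in that segment's determination region. Returns the digit,
    or None when the lit pattern matches no template."""
    lit = "".join(s for s in SEGMENTS
                  if region_pixel_counts.get(s, 0) >= min_pixels)
    return DIGITS.get(lit)
```

Thinning the strokes first (as the abstract's line thinner does) makes the per-region pixel counts insensitive to stroke width, so a single `min_pixels` threshold works across display styles.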
  • Patent number: 12099577
    Abstract: An object recognition method is provided. The method includes: detecting an occlusion region of an object in an image, to obtain a binary image; obtaining occlusion binary image blocks; querying a mapping relationship between occlusion binary image blocks and binary masks included in a binary mask dictionary to obtain binary masks corresponding to the occlusion binary image blocks; synthesizing the binary masks queried based on each of the occlusion binary image blocks, to obtain a binary mask corresponding to the binary image; and determining a matching relationship between the image and a prestored object image, based on the binary mask corresponding to the binary image, a feature of the prestored object image, and a feature of the image to be recognized.
    Type: Grant
    Filed: November 5, 2021
    Date of Patent: September 24, 2024
    Assignee: TENCENT TECHNOLOGY (SHENZHEN) COMPANY LIMITED
    Inventors: Lingxue Song, Dihong Gong, Zhifeng Li, Wei Liu
  • Patent number: 12101491
    Abstract: An image encoding method including: a constraint information generating step of generating tile constraint information indicating whether or not there is a constraint in filtering on boundaries between adjacent tiles among a plurality of tiles obtained by dividing a picture, and storing the tile constraint information into a sequence parameter set; and a filter information generating step of generating, for each of the boundaries, one of a plurality of filter information items respectively indicating whether or not filtering is executed on the boundaries, and storing the plurality of filter information items into a plurality of picture parameter sets, wherein, in the filter information generating step, the plurality of filter information items which indicate identical content are generated when the tile constraint information indicates that there is the constraint in the filtering.
    Type: Grant
    Filed: August 14, 2023
    Date of Patent: September 24, 2024
    Assignee: SUN PATENT TRUST
    Inventors: Hisao Sasai, Takahiro Nishi, Youji Shibahara, Toshiyasu Sugio, Kyoko Tanikawa, Toru Matsunobu, Kengo Terada
  • Patent number: 12087027
    Abstract: In an object recognition apparatus, a storage unit stores a table in which a plurality of feature amounts are associated with each object having feature points of respective feature amounts. An object region detection unit detects object regions of a plurality of objects from an input image. A feature amount extraction unit extracts feature amounts of feature points from the input image. A refining unit refers to the table, and refines from all objects of recognition subjects to object candidates corresponding to the object regions based on feature amounts of feature points belonging to the object regions. A matching unit recognizes the plurality of objects by matching the feature points belonging to each of the object regions with feature points for each of the object candidates, and outputs a recognition result.
    Type: Grant
    Filed: January 31, 2020
    Date of Patent: September 10, 2024
    Assignee: NEC CORPORATION
    Inventors: Yu Nabeto, Soma Shiraishi
  • Patent number: 12080014
    Abstract: A method of generating a correction plan for a knee of a patient includes obtaining a ratio of reference bone density to reference ligament tension in a reference population. A bone of the knee of the patient may be imaged. From the image of the bone, a first dataset may be determined including at least one site of ligament attachment and existing dwell points of a medial femoral condyle and lateral femoral condyle of the patient on a tibia of the patient. Desired positions of contact in three dimensions of the femoral condyles of the patient with the tibia of the patient may be obtained by determining a relationship in which a ratio of bone density to ligament tension of the patient is substantially equal to the ratio of reference bone density to reference ligament tension.
    Type: Grant
    Filed: September 27, 2022
    Date of Patent: September 3, 2024
    Assignee: Howmedica Osteonics Corp.
    Inventors: Gokce Yildirim, Sally Liarno, Mark Gruczynski