Patents Examined by Wesley J Tucker
-
Patent number: 12190472
Abstract: A video enhancement method and apparatus, an electronic device, and a storage medium are described. The method comprises: extracting features from M frames of images to obtain at least one first-scale image feature (S310); for each first-scale image feature, performing N-level down-sampling processing on the first-scale image feature to obtain a second-scale image feature (S320); performing N-level up-sampling processing on the second-scale image feature to obtain a third-scale image feature (S330), wherein the input of the ith-level up-sampling processing is an image feature obtained by superimposing the output of the (N+1-i)th-level down-sampling processing and the output of the (i-1)th-level up-sampling processing, and the multiple of the jth-level up-sampling is the same as the multiple of the (N+1-j)th-level down-sampling; and performing superimposition processing on the third-scale image feature and the first-scale image feature.
Type: Grant
Filed: March 10, 2021
Date of Patent: January 7, 2025
Assignee: BOE TECHNOLOGY GROUP CO., LTD.
Inventors: Dan Zhu, Ran Duan, Guannan Chen
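The index relation in this abstract (up-sampling level i consumes the superimposed outputs of down-sampling level N+1-i and up-sampling level i-1, with matching scale multiples) is the familiar encoder-decoder skip-connection pattern. A minimal sketch of that wiring, with placeholder average/repeat operations standing in for the learned convolutional layers:

```python
# Sketch of the N-level down/up-sampling with skip connections described
# in the abstract. Real layers would be learned convolutions; here the
# downsample/upsample functions are illustrative placeholders.

def downsample(x):
    # 2x down-sampling: average adjacent pairs (stand-in for a strided conv).
    return [(a + b) / 2 for a, b in zip(x[::2], x[1::2])]

def upsample(x):
    # 2x up-sampling: repeat each element (stand-in for a deconv layer).
    return [v for v in x for _ in range(2)]

def enhance(feature, n_levels):
    # N-level down-sampling; keep every intermediate output for the skips.
    down = [feature]                      # down[k] = output of level k
    for _ in range(n_levels):
        down.append(downsample(down[-1]))

    # N-level up-sampling: level i superimposes (element-wise sum) the
    # output of down-sampling level N+1-i with the output of up-sampling
    # level i-1, then up-samples by the matching multiple.
    up_out = down[n_levels]               # second-scale feature
    for i in range(1, n_levels + 1):
        merged = [a + b for a, b in zip(down[n_levels + 1 - i], up_out)]
        up_out = upsample(merged)

    # Final superimposition of third-scale and first-scale features.
    return [a + b for a, b in zip(up_out, feature)]
```

Shapes line up because the jth up-sampling multiple matches the (N+1-j)th down-sampling multiple, so each skip connection joins features at the same resolution.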
-
Patent number: 12189734
Abstract: The present disclosure relates to security control methods and systems. The security control system may include at least one storage device storing a set of instructions, and one or more processors in communication with the at least one storage device, wherein, when executing the set of instructions, the one or more processors are configured to direct the system to: obtain first data from a first device; obtain second data from a second device; associate and process the first data and/or the second data; and send the processed first data and second data to a server and/or a user terminal. The present disclosure can achieve linkage control of a plurality of access controls to meet users' indoor security needs.
Type: Grant
Filed: April 30, 2021
Date of Patent: January 7, 2025
Assignee: YUNDING NETWORK TECHNOLOGY (BEIJING) CO., LTD.
Inventors: Yushan Yang, Weiliang Chen, Jian Long, Tao Li, Dasheng Liu, Qi Yi, Haibo Yu
-
Patent number: 12182722
Abstract: A compact generative neural network can be distilled from a teacher generative neural network using a training network. The compact (student) network can be trained on the input data and output data of the teacher network. The training network trains the student network using a discrimination layer and one or more types of losses, such as perception loss and adversarial loss.
Type: Grant
Filed: June 22, 2023
Date of Patent: December 31, 2024
Assignee: Snap Inc.
Inventors: Sergey Tulyakov, Sergei Korolev, Aleksei Stoliar, Maksim Gusarov, Sergei Kotcur, Christopher Yale Crutchfield, Andrew Wan
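A distillation objective of this shape is typically a weighted sum of a pixel reconstruction term against the teacher's output, a perception term over deep features, and an adversarial term from the discrimination layer. A minimal sketch, where the feature extractor, discriminator, and loss weights are all illustrative stand-ins rather than anything specified by the patent:

```python
# Sketch of a teacher-student distillation loss combining pixel,
# perception, and adversarial terms. The feature_extractor and
# discriminator callables, and the weights, are hypothetical stand-ins.

import math

def l2(a, b):
    # Mean squared error between two equal-length vectors.
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

def perception_loss(student_out, teacher_out, feature_extractor):
    # Compare deep features rather than raw pixels.
    return l2(feature_extractor(student_out), feature_extractor(teacher_out))

def adversarial_loss(student_out, discriminator):
    # Non-saturating GAN loss: push the "real" probability up.
    p = discriminator(student_out)
    return -math.log(max(p, 1e-12))

def distillation_loss(student_out, teacher_out, feature_extractor,
                      discriminator, w_pixel=1.0, w_perc=0.1, w_adv=0.01):
    return (w_pixel * l2(student_out, teacher_out)
            + w_perc * perception_loss(student_out, teacher_out, feature_extractor)
            + w_adv * adversarial_loss(student_out, discriminator))
```

In practice the relative weights are tuned so the adversarial term sharpens outputs without destabilizing training.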
-
Patent number: 12183068
Abstract: Computing platforms and methods are disclosed for verification of cleaning services to be performed in an area to be cleaned. Exemplary implementations may: identify, from among a plurality of cleaning regions relating to the area to be cleaned, a group of designated cleaning regions; randomly generate, with respect to the plurality of cleaning regions, a set of inspection regions; provide a user interface to prompt a user to capture an image of each inspection region; directly capture the images of each inspection region via the user interface; compare the captured images of each inspection region with corresponding reference images to determine whether each of the inspection regions has been properly cleaned; and output an inspection result based on the comparison of the captured images with the corresponding reference images. Implementations provide a method for virtually supervising otherwise unsupervised workers, which increases accountability and generates unprecedented visibility.
Type: Grant
Filed: November 4, 2021
Date of Patent: December 31, 2024
Assignee: Modern Cleaning Concept L.P.
Inventors: Alejandro Bremer Sada, Daniel Eric Wolfe, Jason Arthur Prizant, Bram Mitchell Lesser, Rajiv Uttamchandani, Christopher Manitt, Claire Robbins, Avi Steinberg
-
Patent number: 12182977
Abstract: An image processing method is applied to an image display device; the image display device includes a lens having distortion coefficients and a display screen. The image processing method includes: dividing a display region of the display screen into image regions according to the distortion coefficients of the lens, an outer boundary line of each image region enclosing a polygon, a geometric center of the polygon enclosed by the outer boundary line of the image region coinciding with a geometric center of the lens, and distortion coefficients of positions of the lens on which vertexes of the polygon enclosed by the outer boundary line of the image region are mapped being the same; and performing anti-distortion processing on coordinates of vertexes of the image region according to a distortion coefficient corresponding to the vertexes of the image region to obtain texture coordinates of the vertexes of the image region.
Type: Grant
Filed: April 12, 2021
Date of Patent: December 31, 2024
Assignees: BEIJING BOE OPTOELECTRONICS TECHNOLOGY CO., LTD., BOE TECHNOLOGY GROUP CO., LTD.
Inventors: Qingwen Fan, Longhui Wang, Jinghua Miao, Shuai Hao, Huidong He, Shuo Zhang, Wenyu Li, Lili Chen, Hao Zhang
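Anti-distortion of this kind pre-warps vertex coordinates by the inverse of the lens distortion so that, after the lens distorts the displayed image, vertexes land where intended. A sketch under the common assumption of a radial polynomial distortion model; the coefficient values and the simple fixed-point inversion are illustrative, not taken from the patent:

```python
# Sketch of per-vertex anti-distortion: invert a radial polynomial
# lens-distortion model to obtain texture coordinates. Coefficients
# k1, k2 and the fixed-point inversion scheme are illustrative.

def distort(x, y, k1, k2, cx=0.0, cy=0.0):
    # Forward radial distortion about center (cx, cy):
    # r' = r * (1 + k1*r^2 + k2*r^4)
    dx, dy = x - cx, y - cy
    r2 = dx * dx + dy * dy
    scale = 1.0 + k1 * r2 + k2 * r2 * r2
    return cx + dx * scale, cy + dy * scale

def undistort(x, y, k1, k2, cx=0.0, cy=0.0, iters=20):
    # Fixed-point iteration for the inverse mapping: nudge the guess by
    # the residual until distort(guess) reproduces the target point.
    ux, uy = x, y
    for _ in range(iters):
        fx, fy = distort(ux, uy, k1, k2, cx, cy)
        ux += x - fx
        uy += y - fy
    return ux, uy
```

The fixed-point loop converges quickly for the mild distortions typical of display lenses; stronger distortion would call for a Newton step instead.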
-
Patent number: 12171387
Abstract: A dishwasher comprises an image sensor, first and second drawers, first and second actuators configured to respectively extend or retract the first and second drawers, and a computer system. The computer system is configured to receive images from the image sensor, determine whether an object is within a threshold distance, and, in response to determining that an object has moved to within the threshold distance, use a washwear recognition system to determine whether the object comprises a washwear item. A determined washwear type and a washwear dimension are used to identify an unoccupied location of a size and configuration suitable to receive the washwear item. In response to identifying such an unoccupied location, an actuator among the first actuator and the second actuator is caused to extend a corresponding drawer to receive the washwear item.
Type: Grant
Filed: June 20, 2022
Date of Patent: December 24, 2024
Assignee: Idealab Studio, LLC
Inventor: William Tod Gross
-
Patent number: 12169974
Abstract: High resolution object detection systems and methods provide accurate, real-time, one-stage processing, and include a backbone network configured to receive an input image and generate multi-scale feature representations, a feature fusion block configured to fuse the multi-scale feature representations, a plurality of representation transfer modules configured to isolate and decouple sub-task networks and the multi-scale feature representations, and a cascade refinement module configured to process each representation transfer module output to refine predictions. The backbone network generates a plurality of image features corresponding to each of a plurality of image scales and includes a plurality of convolutional layers and a stem block after the first convolutional layer, wherein the stem block improves feature extraction performance. The feature fusion block generates feature outputs for each of a plurality of image scales.
Type: Grant
Filed: July 13, 2021
Date of Patent: December 17, 2024
Assignee: FLIR Unmanned Aerial Systems ULC
Inventor: Jun Wang
-
Patent number: 12165357
Abstract: Provided is a process for generating specifications for lenses of eyewear based on locations of extents of the eyewear determined through a pupil location determination process. Some embodiments capture an image and determine, using computer vision image recognition functionality, the pupil locations of a human's eyes based on the captured image depicting the human wearing eyewear.
Type: Grant
Filed: May 9, 2023
Date of Patent: December 10, 2024
Assignee: Electric Avenue Software, Inc.
Inventors: David Barton, Ethan D. Joffe
-
Patent number: 12159474
Abstract: In one aspect, a computerized method useful for dynamic location-based virtualized mail services includes the steps of: determining an identity of a user receiving a physical mail item; determining a location of the user; determining a set of delivery locations within a specified distance of the user's current location; and communicating, via an electronic message, the delivery location to the user's mobile device.
Type: Grant
Filed: April 19, 2021
Date of Patent: December 3, 2024
Inventor: Hasan Mirjan
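The "delivery locations within a specified distance" step is a standard proximity filter over geocoded candidates. A sketch using great-circle (haversine) distance; the coordinates and the 2 km radius in the test are examples only:

```python
# Sketch of filtering candidate delivery locations by great-circle
# distance from the user's current position. Data values are examples.

import math

def haversine_km(lat1, lon1, lat2, lon2):
    # Great-circle distance between two (lat, lon) points, in kilometres.
    r = 6371.0  # mean Earth radius
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dlam / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

def nearby_locations(user, candidates, max_km):
    # candidates: iterable of (name, lat, lon); returns names in range.
    return [name for name, lat, lon in candidates
            if haversine_km(user[0], user[1], lat, lon) <= max_km]
```

A production system would typically push this filter into a geo-indexed database query rather than scanning candidates in application code.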
-
Patent number: 12154197
Abstract: Examples are disclosed herein that relate to detecting product requirements within a digitized document. One example provides a method comprising: identifying a first page as a summary page, the first page comprising a keyword that refers to a second page; and detecting in the first page a first instance of a pattern comprising a first text block adjacent to a first line. A first part name and a first requirement for a first part are extracted from the first text block. In the first page, a second instance of the pattern is detected comprising a second text block adjacent to a second line. The keyword and a second part name are extracted from the second text block. The second part name and a second requirement for a second part are extracted from the second page. The first requirement and the second requirement are output for storage in a data store.
Type: Grant
Filed: December 21, 2021
Date of Patent: November 26, 2024
Assignee: The Boeing Company
Inventors: Ahmad R. Yaghoobi, Krishna P. Srinivasmurthy, Temourshah Ahmady
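The extraction step (pulling a part name and its requirement out of a detected text block) can be sketched with a regular expression. The "PART-NAME: requirement" layout below is a made-up stand-in; the actual text-block format in the digitized documents is not specified by the abstract:

```python
# Sketch of the part-name / requirement extraction step. The
# "PART-NAME: requirement" block format is a hypothetical example.

import re

BLOCK_RE = re.compile(r"(?P<part>[A-Z][A-Z0-9-]+):\s*(?P<req>.+)")

def extract_requirements(text_blocks):
    # Return (part name, requirement) pairs found in the text blocks;
    # blocks that do not match the pattern are skipped.
    results = []
    for block in text_blocks:
        m = BLOCK_RE.match(block.strip())
        if m:
            results.append((m.group("part"), m.group("req")))
    return results
```

Extracted pairs would then be written to the data store, with summary-page keywords driving a second extraction pass on the referenced pages.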
-
Patent number: 12148175
Abstract: A method includes obtaining a first optical flow vector representing motion between consecutive video frames during a previous time step. The method also includes generating a first predicted optical flow vector from the first optical flow vector using a trained prediction model, where the first predicted optical flow vector represents predicted motion during a current time step. The method further includes refining the first predicted optical flow vector using a trained update model to generate a second optical flow vector representing motion during the current time step. The trained update model uses the first predicted optical flow vector, a video frame of the previous time step, and a video frame of the current time step to generate the second optical flow vector.
Type: Grant
Filed: February 2, 2022
Date of Patent: November 19, 2024
Assignee: Samsung Electronics Co., Ltd.
Inventors: Yingmao Li, Chenchi Luo, Gyeongmin Choe, John Seokjun Lee
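The predict-then-refine structure here mirrors a classic predictor-corrector loop: a motion prior from the previous flow, corrected by evidence from the two frames. A toy sketch with linear stand-ins for the two trained models (the real models would be learned networks):

```python
# Sketch of the predict-then-refine optical flow pipeline. The
# prediction and update functions below are toy linear stand-ins for
# the trained models described in the abstract.

def predict_flow(prev_flow):
    # Prediction-model stand-in: assume roughly constant motion.
    return [v * 1.0 for v in prev_flow]

def update_flow(predicted, frame_prev, frame_cur, gain=0.5):
    # Update-model stand-in: nudge the prediction toward the per-pixel
    # brightness difference between the previous and current frames.
    residual = [c - p for p, c in zip(frame_prev, frame_cur)]
    return [f + gain * (r - f) for f, r in zip(predicted, residual)]

def flow_step(prev_flow, frame_prev, frame_cur):
    predicted = predict_flow(prev_flow)   # motion prior for this step
    return update_flow(predicted, frame_prev, frame_cur)
```

Because the predictor supplies a warm start, the update model only has to learn a correction, which is what makes this decomposition efficient for real-time video.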
-
Patent number: 12141955
Abstract: A processor determines a cause of an image defect based on a test image that is obtained through an image reading process performed on an output sheet output from an image forming device. The processor generates an extraction image by extracting, from the test image, a noise point that is a type of image defect. Furthermore, the processor determines a cause of the noise point by using at least one of, in the extraction image: an edge strength of the noise point; a degree of flatness of the noise point; and a pixel value distribution of a transverse pixel sequence that is a pixel sequence traversing the noise point.
Type: Grant
Filed: December 21, 2021
Date of Patent: November 12, 2024
Assignee: KYOCERA Document Solutions Inc.
Inventors: Kazunori Tanaka, Kanako Morimoto, Takuya Miyamoto, Koji Sato, Rui Hamabe
-
Patent number: 12142282
Abstract: Systems, methods, and computer program products are disclosed for removing noise from facial skin micromovement signals. Removing noise from facial skin micromovements includes, during a time period when an individual is involved in at least one non-speech-related physical activity, operating a light source in a manner enabling illumination of a facial skin region of the individual; receiving signals representing light reflections from the facial skin region; analyzing the received signals to identify a first reflection component indicative of prevocalization facial skin micromovements and a second reflection component associated with the at least one non-speech-related physical activity; and filtering out the second reflection component to enable interpretation of words from the first reflection component indicative of the prevocalization facial skin micromovements.
Type: Grant
Filed: November 9, 2023
Date of Patent: November 12, 2024
Assignee: Q (Cue) Ltd.
Inventors: Aviad Maizels, Yonatan Wexler, Avi Barliya
-
Patent number: 12137303
Abstract: A method includes capturing a first image associated with a portion of a display screen being shared. The method further includes rendering the first image in a preview window of the display screen being shared to form a second image. The second image is captured so as to determine whether the first image is duplicated in the second image. The duplication of the first image in the second image is masked to form a third image. The third image is rendered in the preview window.
Type: Grant
Filed: June 23, 2023
Date of Patent: November 5, 2024
Assignee: RingCentral, Inc.
Inventor: Aleksei Petrov
-
Patent number: 12136243
Abstract: At least one embodiment relates to a method of assigning a pixel value of an occupancy map, wherein the pixel value either indicates that a depth value of at least one 3D sample of a point cloud frame projected along a same projection line is stored as a pixel value of at least one layer, or equals a fixed-length codeword representing a depth value of at least one 3D sample projected along said projection line.
Type: Grant
Filed: January 27, 2020
Date of Patent: November 5, 2024
Assignee: INTERDIGITAL VC HOLDINGS, INC.
Inventors: Joan Llach, Celine Guede, Jean-Claude Chevet
-
Patent number: 12122392
Abstract: State information can be determined for a subject that is robust to different inputs or conditions. For drowsiness, facial landmarks can be determined from captured image data and used to determine a set of blink parameters. These parameters can be used, such as with a temporal network, to estimate a state (e.g., drowsiness) of the subject. To improve robustness, an eye state determination network can determine eye state from the image data, without reliance on intermediate landmarks, that can be used, such as with another temporal network, to estimate the state of the subject. A weighted combination of these values can be used to determine an overall state of the subject. To improve accuracy, individual behavior patterns and context information can be utilized to account for variations in the data due to subject variation or current context rather than changes in state.
Type: Grant
Filed: August 24, 2021
Date of Patent: October 22, 2024
Assignee: Nvidia Corporation
Inventors: Yuzhuo Ren, Niranjan Avadhanam
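The "weighted combination of these values" step is a late-fusion ensemble of the landmark-based and direct eye-state estimates. A minimal sketch; the 0.6/0.4 weights and the 0.5 decision threshold are illustrative placeholders, and in the described system could themselves be adapted per subject and context:

```python
# Sketch of late fusion of two drowsiness estimates: one from blink
# parameters (via facial landmarks) and one from a direct eye-state
# network. Weights and threshold are illustrative, not from the patent.

def fuse_drowsiness(landmark_score, eye_state_score,
                    w_landmark=0.6, w_eye=0.4):
    # Both scores are assumed normalized to [0, 1]; the weighted mean
    # is normalized so arbitrary weight pairs still yield [0, 1].
    total = w_landmark + w_eye
    return (w_landmark * landmark_score + w_eye * eye_state_score) / total

def classify(score, threshold=0.5):
    return "drowsy" if score >= threshold else "alert"
```

Fusing the two branches makes the estimate robust: when landmark detection degrades (e.g., under occlusion), the direct eye-state branch still contributes a usable signal.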
-
Patent number: 12125222
Abstract: Systems, methods, models, and training data for models are discussed, for determining vehicle positioning, and in particular identifying tailgating. Simulated training images showing vehicles following other vehicles, under various conditions, are generated using a virtual environment. Models are trained to determine following distance between two vehicles. Trained models are used in detection of tailgating, based on determined distance between two vehicles. Results of tailgating are output to warn a driver, or to provide a report on driver behavior. Following distance over time is determined, and simplified following distance data is generated for use at a management device.
Type: Grant
Filed: March 28, 2024
Date of Patent: October 22, 2024
Assignee: Geotab Inc.
Inventors: Cristian Florin Ivascu, Joy Mazumder, Shashank Saurav, Javed Siddique, Mohammed Sohail Siddique, Donghao Qiao
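Once a model has produced a following distance, the tailgating decision itself is simple: convert distance and speed into time headway and compare against a threshold. A sketch using the common two-second-headway heuristic, which is an assumption here rather than the rule the patented system necessarily applies:

```python
# Sketch of the tailgating decision given a model-estimated following
# distance. The 2-second headway threshold is a common driving
# heuristic, used here as an illustrative assumption.

def time_headway_s(distance_m, speed_mps):
    # Seconds until the follower reaches the leader's current position.
    if speed_mps <= 0:
        return float("inf")  # stationary vehicles cannot tailgate
    return distance_m / speed_mps

def is_tailgating(distance_m, speed_mps, min_headway_s=2.0):
    return time_headway_s(distance_m, speed_mps) < min_headway_s

def headway_series(distances_m, speeds_mps):
    # Simplified following-distance-over-time data, e.g. for reporting
    # to a management device.
    return [round(time_headway_s(d, v), 2)
            for d, v in zip(distances_m, speeds_mps)]
```

Using time headway rather than raw distance makes the threshold speed-independent: 20 m is dangerous at highway speed but comfortable in slow traffic.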
-
Patent number: 12112538
Abstract: A computer-implemented method for classifying video data with improved accuracy includes obtaining, by a computing system comprising one or more computing devices, video data comprising a plurality of video frames; extracting, by the computing system, a plurality of video tokens from the video data, the plurality of video tokens comprising a representation of spatiotemporal information in the video data; providing, by the computing system, the plurality of video tokens as input to a video understanding model, the video understanding model comprising a video transformer encoder model; and receiving, by the computing system, a classification output from the video understanding model.
Type: Grant
Filed: July 8, 2021
Date of Patent: October 8, 2024
Assignee: GOOGLE LLC
Inventors: Anurag Arnab, Mostafa Dehghani, Georg Heigold, Chen Sun, Mario Lucic, Cordelia Luise Schmid
-
Patent number: 12106510
Abstract: Disclosed is a computer-implemented method for detecting one or more anatomic landmarks in medical image data. In an embodiment, the method includes receiving a medical image data set depicting a body part of a patient; and determining a first set of anatomic landmarks from a first representation of the medical image data set at a first resolution by applying a first trained function to the first representation of the medical image data set. Based on that, a second set of anatomic landmarks is determined from a second representation of the medical image data set at a second resolution, the second resolution being higher than the first resolution, by applying a second trained function different than the first trained function to the second representation of the medical image data set and using the first set of landmarks by the second trained function.
Type: Grant
Filed: March 3, 2021
Date of Patent: October 1, 2024
Assignee: SIEMENS HEALTHINEERS AG
Inventors: Parmeet Bhatia, Yimo Guo, Gerardo Hermosillo Valadez, Zhigang Peng, Yu Zhao
-
Patent number: 12100154
Abstract: A medical image processing apparatus including an obtaining unit configured to obtain a tomographic image of an eye to be examined, and a first processing unit configured to perform first detection processing for detecting at least one layer of a plurality of layers in the obtained tomographic image by using the obtained tomographic image as input data of a learned model, wherein the learned model has been obtained by using training data including data indicating at least one layer of a plurality of layers in a tomographic image of an eye to be examined.
Type: Grant
Filed: February 5, 2021
Date of Patent: September 24, 2024
Assignee: CANON KABUSHIKI KAISHA
Inventors: Yoshihiko Iwase, Hideaki Mizobe, Ritsuya Tomita