Patents Issued on May 7, 2024
-
Patent number: 11978228
Abstract: Embodiments for determining an orientation of a scanned image. In an exemplary embodiment, a method comprises receiving image data of an image of one or more objects. Each object of the one or more objects in the image includes a plurality of points on a surface of the object. The method further comprises generating a plurality of subsets of points of the plurality of points and fitting a parametric model to more than one subset of the plurality of subsets to generate a plurality of parametric models. Further, the method identifies a parametric model of the plurality of parametric models that includes the largest number of points and orients the image based on the parametric model that includes the largest number of points.
Type: Grant
Filed: November 21, 2019
Date of Patent: May 7, 2024
Assignee: KI MOBILITY LLC
Inventors: Douglas H. Munsey, Jr., Daniel H. Packard, Thomas J. Whelan, Patrick Abadi, William G. N. Coon, Robin D. Knight, Kimberly M. Wheeler
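The consensus-style fitting this abstract describes (fit a parametric model to many random subsets of points and keep the model supported by the most points) resembles RANSAC-style plane fitting. Below is a minimal Python sketch under that assumption; the plane model, function names, and tolerance are illustrative and not taken from the patent.

```python
import numpy as np

def fit_plane(pts):
    """Fit a plane n.x = d through three points; returns None if they are collinear."""
    n = np.cross(pts[1] - pts[0], pts[2] - pts[0])
    norm = np.linalg.norm(n)
    if norm < 1e-9:
        return None
    n = n / norm
    return n, float(n @ pts[0])

def dominant_plane(points, n_subsets=200, tol=2.0, seed=0):
    """Fit planes to random 3-point subsets and keep the one with the most supporting points."""
    rng = np.random.default_rng(seed)
    best_model, best_support = None, -1
    for _ in range(n_subsets):
        idx = rng.choice(len(points), size=3, replace=False)
        model = fit_plane(points[idx])
        if model is None:
            continue
        n, d = model
        support = int(np.sum(np.abs(points @ n - d) < tol))  # points close to this plane
        if support > best_support:
            best_model, best_support = model, support
    return best_model, best_support

# The scan could then be rotated so the dominant plane's normal aligns with a chosen axis.
```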
-
Patent number: 11978229
Abstract: Systems and methods for three-dimensional (3D) localization of an object, including: a processor, a camera, in communication with the processor, and an X-Ray system, coupled to the camera such that a focal point of the camera is aligned with the source of the X-Ray system, where the X-Ray system is directed towards the object, and wherein the processor is configured to determine 3D localization of the object based on a combination of images received from the camera and from the X-Ray system.
Type: Grant
Filed: May 4, 2023
Date of Patent: May 7, 2024
Assignee: Vidisco Ltd.
Inventors: Alon Fleider, Ohad Milo
-
Patent number: 11978230
Abstract: An aerial object position determination system including an acoustic detection module comprising a plurality of microphones positioned about a central axis of the first unit; a computer vision module comprising multiple cameras positioned about the central axis of the first unit; an automatic dependent surveillance-broadcast (ADS-B) receiver provided with the first unit. One or more processors are configured to receive data from the acoustic detection module, the computer vision module and the ADS-B receiver. Based on the received data, the one or more processors determine a position of an aerial object. The determined position of the aerial object is transmitted to a receiving device.
Type: Grant
Filed: August 8, 2023
Date of Patent: May 7, 2024
Assignee: Birdstop, Inc.
Inventors: Keith Miao, Robert Reynoso, Jatin Kolekar, Timothy McPhail
-
Patent number: 11978231
Abstract: A wrinkle detection method includes: obtaining an original image, where the original image includes a face; adjusting a size of an ROI region on the original image to obtain at least two ROI images of different sizes, where the ROI region is a region in which a wrinkle on the face is located. A terminal device processes all the at least two ROI images of different sizes to obtain at least two binary images, where a white region in each binary image is a region in which a wrinkle is suspected to appear. The terminal device fuses the at least two binary images to obtain a final image, where a white region on the final image is recognized as a wrinkle.
Type: Grant
Filed: November 13, 2019
Date of Patent: May 7, 2024
Assignee: Honor Device Co., Ltd.
Inventors: Hongwei Hu, Chen Dong, Xin Ding, Wenmei Gao
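As a rough illustration of the multi-size ROI and binary-map fusion flow in this abstract, the sketch below resizes a grayscale ROI to a few scales, thresholds each copy so suspected wrinkle pixels become white, and ORs the maps together. It assumes OpenCV and uses an adaptive threshold as the per-scale detector; these are illustrative choices, not the patent's method.

```python
import cv2
import numpy as np

def fuse_wrinkle_maps(roi_gray, scales=(1.0, 0.75, 0.5)):
    """Threshold the ROI at several sizes and fuse the binary maps into one wrinkle map."""
    h, w = roi_gray.shape
    fused = np.zeros((h, w), dtype=np.uint8)
    for s in scales:
        resized = cv2.resize(roi_gray, (max(1, int(w * s)), max(1, int(h * s))))
        # Dark, line-like structures are candidate wrinkles; THRESH_BINARY_INV marks them white.
        binary = cv2.adaptiveThreshold(resized, 255, cv2.ADAPTIVE_THRESH_MEAN_C,
                                       cv2.THRESH_BINARY_INV, 15, 5)
        fused = cv2.bitwise_or(fused, cv2.resize(binary, (w, h)))
    return fused  # white pixels mark regions where a wrinkle is suspected
```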
-
Patent number: 11978232
Abstract: A method of displaying 3-dimensional (3D) augmented reality includes transmitting a first image generated by photographing a target object at a first time point, and storing first view data at the first time point; receiving first relative pose data of the target object; estimating pose data of the target object, based on the first view data and the first relative pose data of the target object; generating a second image by photographing the target object at a second time point, and generating second view data at the second time point; estimating second relative pose data of the target object, based on the pose data of the target object and the second view data; rendering a 3D image of a virtual object, based on the second relative pose data of the target object; and generating an augmented image by augmenting the 3D image of the virtual object on the second image.
Type: Grant
Filed: March 28, 2022
Date of Patent: May 7, 2024
Assignee: NAVER LABS CORPORATION
Inventors: Yeong Ho Jeong, Dong Cheol Hur, Sang Wook Kim
-
Patent number: 11978233
Abstract: A system and method include receiving target image data associated with a target coating. A feature extraction analysis process is applied to the target image data to determine a target image feature. The feature extraction analysis process includes dividing the target image into sub-images, each of which contains a plurality of target pixels. A machine learning model identifies one or more types of flakes present in the target coating using target pixel features.
Type: Grant
Filed: December 16, 2020
Date of Patent: May 7, 2024
Assignee: AXALTA COATING SYSTEMS IP CO., LLC
Inventors: Larry E. Steenhoek, Dominic V. Poerio
-
Patent number: 11978234
Abstract: A method and apparatus for processing color data includes storing fragment pointer and color data together in a color buffer. A delta color compression (DCC) key indicating the color data to fetch for processing is stored, and the fragment pointer and color data is fetched based upon the read DCC key for decompression.
Type: Grant
Filed: December 28, 2020
Date of Patent: May 7, 2024
Assignee: Advanced Micro Devices, Inc.
Inventors: Pazhani Pillai, Mark A. Natale, Harish Kumar Kovalam Rajendran
-
Patent number: 11978236
Abstract: State-of-the-art techniques for performing image labeling of remotely sensed data are computation intensive and consume time and resources. A method and system for efficient retrieval of a target in an image in a collection of remotely sensed data is disclosed. Image scanning is performed efficiently, wherein only a small percentage of pixels from the entire image are scanned to identify the target. One or more samples are intelligently identified based on sample selection criteria and are scanned for detecting presence of the target based on a cumulative evidence score. A plurality of sampling approaches comprising active sampling, distributed sampling and hybrid sampling are disclosed that either detect and localize the target or perform image labeling indicating only presence of the target.
Type: Grant
Filed: October 28, 2021
Date of Patent: May 7, 2024
Assignee: Tata Consultancy Services Limited
Inventors: Shailesh Shankar Deshpande, Balamuralidhar Purushothaman
-
Patent number: 11978237
Abstract: Speed of first work is compared with speed of second work based on a first working period when a worker is caused to perform the first work of setting annotation data to first image data and a second working period when the worker is caused to perform the second work of correcting advance annotation data set based on a recognition result obtained by causing a predetermined recognizer to recognize the first image data, and, in a case where the first work is faster than the second work, the worker is requested to correct second image data in which advance annotation data is not set, while, in a case where the second work is faster than the first work, the worker is requested to correct advance annotation data set based on a recognition result obtained by causing the recognizer to recognize the second image data.
Type: Grant
Filed: August 13, 2020
Date of Patent: May 7, 2024
Assignee: PANASONIC INTELLECTUAL PROPERTY CORPORATION OF AMERICA
Inventor: Toru Tanigawa
-
Patent number: 11978238
Abstract: Described herein are systems and methods of converting media dimensions. A device may identify a set of frames from a video in a first orientation as belonging to a scene. The device may receive a selected coordinate on a frame of the set of frames for the scene. The device may identify a first region within the frame including a first feature corresponding to the selected coordinate and a second region within the frame including a second feature. The device may generate a first score for the first feature and a second score for the second feature. The first score may be greater than the second score based on the first feature corresponding to the selected coordinate. The device may crop the frame to include the first region and the second region within a predetermined display area comprising a subset of regions of the frame in a second orientation.
Type: Grant
Filed: March 13, 2023
Date of Patent: May 7, 2024
Assignee: GOOGLE LLC
Inventors: Brian Mulford, Nathan Frey, Alexandros Panagopoulos, Yinquan Hao, Yuan Zhang
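A toy version of the scoring-and-cropping step might look like the following: feature regions come with base scores, the region containing the user-selected coordinate is boosted, and a portrait-aspect window is centered on the winning region. This is a hypothetical heuristic for illustration; the region format, the boost value, and the output aspect ratio are assumptions, not details from the patent.

```python
def choose_crop(frame_w, frame_h, regions, selected_xy, out_aspect=9 / 16):
    """Pick a portrait crop window that keeps the highest-scoring feature region.

    regions: list of (x0, y0, x1, y1, score); selected_xy: (x, y) chosen by the user.
    Returns (left, top, right, bottom) of the crop in the original frame.
    """
    sx, sy = selected_xy
    best_score, best_box = float("-inf"), None
    for x0, y0, x1, y1, score in regions:
        if x0 <= sx <= x1 and y0 <= sy <= y1:
            score += 1.0                   # the selected feature outranks the others
        if score > best_score:
            best_score, best_box = score, (x0, y0, x1, y1)
    cx = (best_box[0] + best_box[2]) / 2
    crop_w = frame_h * out_aspect          # keep full height, narrow the width
    left = min(max(cx - crop_w / 2, 0), max(frame_w - crop_w, 0))
    return int(left), 0, int(left + crop_w), frame_h

print(choose_crop(1920, 1080, [(100, 200, 400, 600, 0.4), (1200, 300, 1500, 700, 0.6)], (250, 400)))
```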
-
Patent number: 11978239
Abstract: The disclosure provides a target detection method and apparatus, a model training method and apparatus, a device, and a storage medium. The target detection method includes: obtaining a first image; obtaining a second image corresponding to the first image, the second image belonging to a second domain; and obtaining a detection result corresponding to the second image through a cross-domain image detection model, the detection result including target localization information and target class information of a target object, the cross-domain image detection model including a first network model configured to convert an image from a first domain into an image in the second domain, and a second network model configured to perform region localization on the image in the second domain.
Type: Grant
Filed: July 14, 2023
Date of Patent: May 7, 2024
Assignee: TENCENT TECHNOLOGY (SHENZHEN) COMPANY LIMITED
Inventor: Ze Qun Jie
-
Patent number: 11978240
Abstract: According to one embodiment, an information processing device includes acquisition means, detection means, and specifying means. The acquisition means acquires image data according to a lost article to which a symbol of a user is attached and identification information according to a group to which the user belongs. The detection means performs analysis set for each item of identification information according to the group with respect to the image data according to the lost article and detects the identification information for identifying the user included in the symbol. The specifying means specifies the user from the identification information detected by the detection means.
Type: Grant
Filed: May 20, 2021
Date of Patent: May 7, 2024
Assignee: TOSHIBA TEC KABUSHIKI KAISHA
Inventors: Tetsuya Nobuoka, Yuishi Takeno, Natsuko Fujii
-
Patent number: 11978241
Abstract: Embodiments of the disclosure provide an image processing method and apparatus, a computer-readable medium, and an electronic device. The image processing method includes: extracting a feature map of a target image; dividing the feature map into target regions; determining weights of the target regions according to feature vectors of the target regions; and generating a feature vector of the target image according to the weights of the target regions and the feature vectors of the target regions.
Type: Grant
Filed: June 21, 2021
Date of Patent: May 7, 2024
Assignee: TENCENT TECHNOLOGY (SHENZHEN) COMPANY LTD
Inventors: Kun Jin, Shi Jie Zhao, Yang Yi, Feng Li, Xiao Xiang Zuo
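The pipeline in this abstract (split a feature map into regions, weight each region from its feature vector, pool into one image-level vector) can be sketched with plain NumPy. The grid split, the norm-based saliency score, and the softmax weighting below are illustrative stand-ins; the abstract does not specify these particular choices.

```python
import numpy as np

def weighted_descriptor(feature_map, grid=(4, 4)):
    """Split an H x W x C feature map into grid regions and pool them with region weights."""
    H, W, C = feature_map.shape
    gh, gw = H // grid[0], W // grid[1]
    regions = np.stack([
        feature_map[i * gh:(i + 1) * gh, j * gw:(j + 1) * gw].mean(axis=(0, 1))
        for i in range(grid[0]) for j in range(grid[1])
    ])                                               # (num_regions, C) region feature vectors
    scores = np.linalg.norm(regions, axis=1)         # per-region saliency stand-in
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                         # softmax over regions
    return (weights[:, None] * regions).sum(axis=0)  # weighted image-level feature vector

vec = weighted_descriptor(np.random.rand(32, 32, 8))
print(vec.shape)  # (8,)
```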
-
Patent number: 11978242
Abstract: There is described a deep learning supervised regression based model including methods and systems for facial attribute prediction and use thereof. An example of use is an augmented and/or virtual reality interface to provide a modified image responsive to facial attribute predictions determined from the image. Facial effects matching facial attributes are selected to be applied in the interface.
Type: Grant
Filed: June 29, 2021
Date of Patent: May 7, 2024
Assignee: L'Oreal
Inventors: Zhi Yu, Yuze Zhang, Ruowei Jiang, Jeffrey Houghton, Parham Aarabi, Frederic Antoinin Raymond Serge Flament
-
Patent number: 11978243
Abstract: One embodiment provides a system that facilitates efficient collection of training data. During operation, the system obtains, by a recording device, a first image of a physical object in a scene which is associated with a three-dimensional (3D) world coordinate frame. The system marks, on the first image, a plurality of vertices associated with the physical object, wherein a vertex has 3D coordinates based on the 3D world coordinate frame. The system obtains a plurality of second images of the physical object in the scene while changing one or more characteristics of the scene. The system projects the marked vertices on to a respective second image to indicate a two-dimensional (2D) bounding area associated with the physical object.
Type: Grant
Filed: November 16, 2021
Date of Patent: May 7, 2024
Assignee: Xerox Corporation
Inventors: Matthew A. Shreve, Sricharan Kallur Palli Kumar, Jin Sun, Gaurang R. Gavai, Robert R. Price, Hoda M. A. Eldardiry
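Projecting annotated 3D vertices into a new view and taking their 2D extent is standard pinhole-camera geometry, so a small sketch may help. It assumes a known intrinsic matrix K and world-to-camera pose (R, t); these names and the clipping behavior are illustrative conventions, not the patent's notation.

```python
import numpy as np

def projected_bbox(vertices_3d, K, R, t, image_size):
    """Project 3D vertices into an image and return the 2D box that encloses them."""
    pts_cam = (R @ vertices_3d.T + t.reshape(3, 1)).T   # world -> camera coordinates
    uv = (K @ pts_cam.T).T
    uv = uv[:, :2] / uv[:, 2:3]                          # perspective divide to pixels
    w, h = image_size
    uv = np.clip(uv, [0.0, 0.0], [w - 1.0, h - 1.0])     # keep the box inside the frame
    (x0, y0), (x1, y1) = uv.min(axis=0), uv.max(axis=0)
    return float(x0), float(y0), float(x1), float(y1)
```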
-
Patent number: 11978244
Abstract: Provided is an atypical environment-based location recognition apparatus. The apparatus includes a sensing information acquisition unit configured to, from sensing data collected by sensor modules, detect object location information and semantic label information of a video image and detect an event in the video image; a walk navigation information provision unit configured to acquire user movement information; a metric map generation module configured to generate a video odometric map using sensing data collected through a sensing information acquisition unit and reflect the semantic label information; and a topology map generation module configured to generate a topology node using sensing data acquired through the sensing information acquisition unit and update the topology node through the collected user movement information.
Type: Grant
Filed: December 14, 2021
Date of Patent: May 7, 2024
Assignee: Electronics and Telecommunications Research Institute
Inventors: So Yeon Lee, Blagovest Iordanov Vladimirov, Sang Joon Park, Jin Hee Son, Chang Eun Lee, Sung Woo Jun, Eun Young Cho
-
Patent number: 11978245
Abstract: The present disclosure discloses a method and apparatus for generating an image. A specific embodiment of the method comprises: acquiring at least two frames of facial images extracted from a target video; and inputting the at least two frames of facial images into a pre-trained generative model to generate a single facial image. The generative model updates a model parameter using a loss function in a training process, and the loss function is determined based on a probability of the single facial generative image being a real facial image and a similarity between the single facial generative image and a standard facial image. According to this embodiment, authenticity of the single facial image generated by the generative model may be enhanced, and then a quality of a facial image obtained based on the video is improved.
Type: Grant
Filed: July 31, 2018
Date of Patent: May 7, 2024
Assignee: BAIDU ONLINE NETWORK TECHNOLOGY (BEIJING) CO., LTD.
Inventors: Tao He, Gang Zhang, Jingtuo Liu
-
Patent number: 11978246
Abstract: Provided is a method for implementing reinforcement learning by a neural network. The method may include performing, for each epoch of a first predetermined number of epochs, a second predetermined number of training iterations and a third predetermined number of testing iterations using a first neural network. The first neural network may include a first set of parameters, the training iterations may include a first set of hyperparameters, and the testing iterations may include a second set of hyperparameters. The testing iterations may be divided into segments, and each segment may include a fourth predetermined number of testing iterations. A first pattern may be determined based on at least one of the segments. At least one of the first set of hyperparameters or the second set of hyperparameters may be adjusted based on the pattern. A system and computer program product are also disclosed.
Type: Grant
Filed: January 3, 2023
Date of Patent: May 7, 2024
Assignee: Visa International Service Association
Inventors: Liang Gou, Hao Yang, Wei Zhang
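One way to read the epoch/segment structure in this abstract is: train for a fixed number of iterations, test for a fixed number of iterations split into segments, look for a pattern across the segment results, and adjust a hyperparameter when the pattern appears. The sketch below uses a steadily falling test reward as the pattern and halves the learning rate in response; the agent interface and that specific rule are assumptions made for illustration only.

```python
def run(agent, epochs=5, train_iters=1000, test_iters=200, segments=4, lr=1e-3):
    """Alternate training and segmented testing; adapt the learning rate on a detected pattern."""
    for _ in range(epochs):
        for _ in range(train_iters):
            agent.train_step(lr=lr)                     # assumed agent API, not from the patent
        seg_len = test_iters // segments
        seg_rewards = [
            sum(agent.test_step() for _ in range(seg_len)) / seg_len
            for _ in range(segments)
        ]
        falling = all(a > b for a, b in zip(seg_rewards, seg_rewards[1:]))
        if falling:                                     # reward drops segment after segment
            lr *= 0.5                                   # one illustrative hyperparameter adjustment
    return lr
```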
-
Patent number: 11978247
Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for receiving multiple images from a camera, each image of the multiple images representative of a detection of an object within the image. For each image of the multiple images the methods include: determining a set of detected objects within the image, each object defined by a respective bounding box, and determining, from the set of detected objects within the image and ground truth labels, a false detection of a first object. The methods further include determining that a target object threshold is met based on a number of false detections of the first object in the multiple images, generating, based on the number of false detections for the first object meeting the target object threshold, an adversarial mask for the first object, and providing, to the camera, the adversarial mask.
Type: Grant
Filed: November 2, 2021
Date of Patent: May 7, 2024
Assignee: ObjectVideo Labs, LLC
Inventors: Allison Beach, Gang Qian, Eduardo Romera Carmena
-
Patent number: 11978248
Abstract: Implementations disclosed herein provide systems and methods that match a current relationship model associated with a user's current environment to a prior relationship model for a prior environment to determine that the user is in the same environment. The current relationship model is compared with the prior relationship model based on matching characteristics of the current relationship model with characteristics of the prior relationship model.
Type: Grant
Filed: March 17, 2023
Date of Patent: May 7, 2024
Assignee: Apple Inc.
Inventors: Angela Blechschmidt, Alexander S. Polichroniadis, Daniel Ulbricht
-
Patent number: 11978249
Abstract: A computer-implemented method for identifying features of interest in a data image. The method includes identifying data variations in a data image or set of data images, each data image comprising rendered data, identifying one or more features of interest in the data image or set of data images based on the identified data variations, identifying a feature of interest genus corresponding to each identified feature of interest, reclassifying the rendered data based on each of the identified features of interest genuses so as to eliminate background data in the rendered data thereby producing an eliminated background dataset, and generating a feature of interest map for each identified feature of interest genus. A machine learning method, including a training phase, for automatically identifying features of interest in a data image is further provided.
Type: Grant
Filed: August 24, 2019
Date of Patent: May 7, 2024
Assignee: Fugro N.V.
Inventors: Christine Devine, William Haneberg
-
Patent number: 11978250
Abstract: In an approach for determining the date of planting of a crop growing in an agricultural field, a processor receives an aerial image of one or more agricultural fields in a pre-determined geographical region. A processor selects a plurality of points from the aerial image. A processor calculates a Vegetation Index of one or more crops growing at the plurality of points selected. A processor compares the Vegetation Index calculated for the one or more crops growing at the plurality of points selected to the Vegetation Index known for a plurality of historical reference signatures. A processor generates an actual signature. A processor cross-correlates the actual signature against the plurality of historical reference signatures to measure a degree of similarity. A processor identifies the one or more crops growing in the one or more agricultural fields in the pre-determined geographical region from the cross-correlation.
Type: Grant
Filed: June 2, 2021
Date of Patent: May 7, 2024
Assignee: International Business Machines Corporation
Inventors: Charles Daniel Wolfson, Kevin Brown, David Alec Selby, Hamish C. Hunt
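The cross-correlation step can be illustrated with NumPy: normalize the observed vegetation-index series, correlate it against each historical reference signature, and keep the best match together with the lag at which the correlation peaks (which can be read as a planting-date offset). The normalization and the interpretation of the lag are assumptions made for this sketch, not claims about the patent's exact procedure.

```python
import numpy as np

def match_signature(observed, references):
    """Cross-correlate an observed VI series against reference signatures.

    Returns (index of best-matching reference, lag in samples at which correlation peaks).
    """
    obs = (observed - observed.mean()) / (observed.std() + 1e-9)
    best_idx, best_lag, best_val = None, None, -np.inf
    for i, ref in enumerate(references):
        r = (ref - ref.mean()) / (ref.std() + 1e-9)
        corr = np.correlate(obs, r, mode="full")
        peak = int(corr.argmax())
        if corr[peak] > best_val:
            best_idx, best_lag, best_val = i, peak - (len(r) - 1), corr[peak]
    return best_idx, best_lag

idx, lag = match_signature(np.sin(np.linspace(0, 6, 60)),
                           [np.sin(np.linspace(0, 6, 60) - 0.5)])
```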
-
Patent number: 11978251
Abstract: This invention relates to methods for determining adoption and impact of regenerative farming practices. Embodiments of these methods take satellite imagery and weather data as inputs, process those data according to methods of the present invention, and produce outputs which indicate whether a specific farming practice (for example, no-till or cover cropping) was adopted for a particular field or region for a particular season.
Type: Grant
Filed: January 19, 2023
Date of Patent: May 7, 2024
Assignee: INDIGO AG, INC.
Inventors: Eli Kellen Melaas, Bobby Harold Braswell, Douglas Kane Bolton
-
Patent number: 11978252
Abstract: A communication system includes circuitry. The circuitry receives an input of language information. The circuitry performs recognition on the input language information. The circuitry displays one or more images corresponding to the input language information on a display, based on a result of the recognition.
Type: Grant
Filed: October 13, 2021
Date of Patent: May 7, 2024
Assignee: RICOH COMPANY, LTD.
Inventors: Yuki Hori, Eri Watanabe, Takuro Yasuda, Kentaroh Hagita, Takuroh Naitoh, Hiroaki Tanaka, Terunori Koyama
-
Patent number: 11978253
Abstract: An augmented reality customer interaction system includes a transparent panel having a first side and a second side that is opposite to the first side, and a camera device configured to capture visual data from an area adjacent to the second side of the transparent panel. The visual data includes identifying features of a customer located in the area with respect to the second side of the transparent panel. The system further includes a projection system configured to project information on the first side of the transparent panel. The information projected on the first side of the transparent panel may include customer interaction data retrieved from a data store based on the identifying features of the customer.
Type: Grant
Filed: February 15, 2022
Date of Patent: May 7, 2024
Assignee: Truist Bank
Inventors: Michael Anthony Dascola, Jacob Atticus Grady, Kaitlyn Stahl
-
Patent number: 11978254
Abstract: Systems and methods for video presentation and analytics for live sporting events are disclosed. At least two cameras are used for tracking objects during a live sporting event and generate video feeds to a server processor. The server processor is operable to match the video feeds and create a 3D model of the world based on the video feeds from the at least two cameras. 2D graphics are created from different perspectives based on the 3D model. Statistical data and analytical data related to object movement are produced and displayed on the 2D graphics. The present invention also provides a standard file format for object movement in space over a timeline across multiple sports.
Type: Grant
Filed: March 22, 2023
Date of Patent: May 7, 2024
Assignee: SPORTSMEDIA TECHNOLOGY CORPORATION
Inventor: Gerard J. Hall
-
Patent number: 11978255
Abstract: A recording control apparatus includes a recording control unit for storing photographing data corresponding to an event of a mobile object as event record data in a recording unit, a distance calculation unit for calculating a distance between a recording apparatus including at least the recording unit and the mobile object, and a communication control unit for transmitting the event record data stored in the recording unit when the distance calculated by the distance calculation unit becomes equal to or greater than a predetermined distance within a predetermined time period after the event detection unit detects the event.
Type: Grant
Filed: November 19, 2021
Date of Patent: May 7, 2024
Assignee: JVCKENWOOD CORPORATION
Inventors: Keita Hayashi, Yasutoshi Sakai, Hirofumi Taniyama
-
Patent number: 11978256
Abstract: A monitoring system is configured to monitor a property. The monitoring system includes a camera, a sensor, and a monitor control unit. The monitor control unit is configured to receive image data and sensor data. The monitor control unit is configured to determine that the image data includes a representation of a person. The monitor control unit is configured to determine an orientation of a representation of a head of the person. The monitor control unit is configured to determine that the representation of the head of the person likely includes a representation of a face of the person. The monitor control unit is configured to determine that the face of the person is likely concealed. The monitor control unit is configured to determine a malicious intent score that reflects a likelihood that the person has a malicious intent. The monitor control unit is configured to perform an action.
Type: Grant
Filed: March 18, 2021
Date of Patent: May 7, 2024
Assignee: Alarm.com Incorporated
Inventors: Donald Madden, Achyut Boggaram, Gang Qian, Daniel Todd Kerzner
-
Patent number: 11978257
Abstract: The invention concerns a device and a method for detecting and identifying a living or non-living entity, allowing these entities to be transformed into a detectable object in order to facilitate their detection, recognition and identification. For this purpose, the device comprises at least one detectable electronic housing referred to as the real entity housing (42) and at least one detection module (142). The real entity housing (42) is associated with, integrated with, incorporated with or substituted, partially or not, for the real entity to be detected and identified (48). The real entity housing (42) broadcasts, unidirectionally and as a broadcast and without dialogue and in a loop, a real or virtual image or an avatar of the entity to be detected.
Type: Grant
Filed: October 18, 2019
Date of Patent: May 7, 2024
Assignees: PRODOSE
Inventor: Morou Boukari
-
Patent number: 11978258
Abstract: Apparatuses, systems, and techniques to identify out-of-distribution input data in one or more neural networks. In at least one embodiment, a technique includes training one or more neural networks to infer a plurality of characteristics about input information based, at least in part, on the one or more neural networks being independently trained to infer each of the plurality of characteristics about the input information.
Type: Grant
Filed: April 6, 2021
Date of Patent: May 7, 2024
Assignee: NVIDIA Corporation
Inventors: Sina Mohseni, Arash Vahdat, Jay Yadawa
-
Patent number: 11978259
Abstract: Systems and methods for operating a mobile platform. The methods comprise, by a computing device: obtaining a LiDAR point cloud; using the LiDAR point cloud to generate a track for a given object in accordance with a particle filter algorithm by generating states of a given object over time (each state has a score indicating a likelihood that a cuboid would be created given an acceleration value and an angular velocity value); using the track to train a machine learning algorithm to detect and classify objects based on sensor data; and/or causing the machine learning algorithm to be used for controlling movement of the mobile platform.
Type: Grant
Filed: July 9, 2021
Date of Patent: May 7, 2024
Assignee: Ford Global Technologies, LLC
Inventor: Kevin James Player
-
Patent number: 11978260
Abstract: System and methods are disclosed for rapid license plate reading. A first image having a first resolution may be generated. A location of a license plate in the first image may be detected. The license plate may be read from a second image in accordance with the location of the license plate. The second image may have a second resolution greater than the first resolution. In embodiments, reading the license plate may comprise tracking the license plate across a plurality of license plate images.
Type: Grant
Filed: August 25, 2021
Date of Patent: May 7, 2024
Assignee: Axon Enterprise, Inc.
Inventors: Matti Suksi, Jesse Hakanen, Juha Alakarhu
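The two-resolution flow described here (detect on a small frame, read on a large one) can be shown with a short helper. `detector` and `reader` are stand-in callables for whatever localization and OCR models are used, and the coordinate scaling assumes both frames cover the same view; none of these names come from the patent.

```python
def read_plate(low_res_frame, high_res_frame, detector, reader, scale):
    """Locate the plate cheaply in the low-resolution frame, then read it from the high-resolution one.

    detector(frame) -> (x0, y0, x1, y1) or None, in low-resolution pixel coordinates.
    reader(crop)    -> plate string; scale maps low-resolution coordinates to high-resolution ones.
    """
    box = detector(low_res_frame)
    if box is None:
        return None                                    # no plate found at low resolution
    x0, y0, x1, y1 = (int(round(v * scale)) for v in box)
    crop = high_res_frame[y0:y1, x0:x1]                # only this crop is processed at full resolution
    return reader(crop)
```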
-
Patent number: 11978261
Abstract: An information processing apparatus and an information processing method to properly acquire a location of a surrounding vehicle using three-dimensional detection information regarding an object around an own vehicle. A camera captures an image of surroundings of an own automobile, and a region of a vehicle in the captured image is detected as a frame, the vehicle being in the surroundings of the own automobile. Three-dimensional information regarding an object in the surroundings of the own automobile is detected, and a three-dimensional box that indicates a location of the vehicle in the surroundings of the own automobile is generated on the basis of the three-dimensional information. Correction is performed on the three-dimensional box on the basis of the frame, and the three-dimensional box is arranged to generate surrounding information.
Type: Grant
Filed: November 22, 2019
Date of Patent: May 7, 2024
Assignee: SONY SEMICONDUCTOR SOLUTIONS CORPORATION
Inventor: Takafumi Shokonji
-
Patent number: 11978262
Abstract: A method for training a classifier for image data using learning image data and associated labels, each of the labels including an allocation to one or multiple classes of a predefined classification. In the method, for each data set of learning image data, space-resolved relevance maps are provided, which indicate how relevant the various spatial areas of the particular learning image data are for the assessment of the situation shown in the learning image data. From data sets of learning image data and associated relevance maps, learning samples are ascertained; the learning samples are fed to the classifier; and classifier parameters are optimized with the aim that the classifier maps the learning samples to allocations to one or multiple classes which are consistent with the labels of the learning image data from which the learning samples originate.
Type: Grant
Filed: June 24, 2021
Date of Patent: May 7, 2024
Assignee: ROBERT BOSCH GMBH
Inventor: Udo Mayer
-
Patent number: 11978263
Abstract: A method for determining a safe state for a vehicle includes disposing a camera at a vehicle and disposing an electronic control unit (ECU) at the vehicle. Frames of image data are captured by the camera and provided to the ECU. An image processor of the ECU processes frames of image data captured by the camera. A condition is determined via processing, at the image processor of the ECU, frames of image data captured by the camera. The condition includes a shadow present in the field of view of the camera within ten frames of image data captured by the camera or a damaged condition of the imager within two minutes of operation of the camera. The ECU determines a safe state for the vehicle responsive to determining the condition.
Type: Grant
Filed: May 22, 2023
Date of Patent: May 7, 2024
Assignee: MAGNA ELECTRONICS INC.
Inventors: Horst D. Diessner, Richard C. Bozich, Aleksandar Stefanovic, Anant Kumar Lall, Nikhil Gupta
-
Patent number: 11978264
Abstract: Systems and methods for constructing and managing a unique road sign knowledge graph across various countries and regions are disclosed. The system utilizes machine learning methods to assist humans when comparing a new sign template with a plurality of stored sign templates to reduce or eliminate redundancy in the road sign knowledge graph. Such a machine learning method and system is also used in providing visual attributes of road signs such as sign shapes, colors, symbols, and the like. If the machine learning determines that the input road sign template is not found in the road sign knowledge graph, the input sign template can be added to the road sign knowledge graph. The road sign knowledge graph can be maintained to add sign templates that are not already in the knowledge graph but are found in the real world by integrating a human annotator's feedback during ground truth generation for machine learning.
Type: Grant
Filed: August 17, 2021
Date of Patent: May 7, 2024
Assignee: Robert Bosch GmbH
Inventors: Ji Eun Kim, Kevin H. Huang, Mohammad Sadegh Norouzzadeh, Shashank Shekhar
-
Patent number: 11978265
Abstract: A method for displaying lane information on an augmented reality display includes receiving roadway data. The roadway data includes information about a roadway along a route of a vehicle. The roadway includes a plurality of lanes. The roadway data includes lane information about at least one of the plurality of lanes along the route of the vehicle. The method further includes receiving vehicle-location data. The vehicle-location data indicates a location of the vehicle. The method further includes determining that the vehicle is approaching a road junction using the vehicle-location data and the roadway data. The method further includes, in response to determining that the vehicle is approaching the road junction, transmitting a command signal to a dual-focal plane augmented reality display to display at least one virtual image that is indicative of the lane information.
Type: Grant
Filed: March 11, 2022
Date of Patent: May 7, 2024
Assignee: GM GLOBAL TECHNOLOGY OPERATIONS LLC
Inventors: Joseph F. Szczerba, John P. Weiss, Kai-Han Chang, Thomas A. Seder
-
Patent number: 11978266
Abstract: In various examples, estimated field of view or gaze information of a user may be projected external to a vehicle and compared to vehicle perception information corresponding to an environment outside of the vehicle. As a result, interior monitoring of a driver or occupant of the vehicle may be used to determine whether the driver or occupant has processed or seen certain object types, environmental conditions, or other information exterior to the vehicle. For a more holistic understanding of the state of the user, attentiveness and/or cognitive load of the user may be monitored to determine whether one or more actions should be taken. As a result, notifications, AEB system activations, and/or other actions may be determined based on a more complete state of the user as determined based on cognitive load, attentiveness, and/or a comparison between external perception of the vehicle and estimated perception of the user.
Type: Grant
Filed: October 21, 2020
Date of Patent: May 7, 2024
Assignee: NVIDIA Corporation
Inventors: Nuri Murat Arar, Niranjan Avadhanam, Yuzhuo Ren
-
Patent number: 11978267
Abstract: A method and related system operations include obtaining a video stream with an image sensor of a camera device, detecting a plurality of target objects by executing a neural network model based on the video stream with a vision processor unit of the camera device. The method also includes generating a plurality of bounding boxes, determining a plurality of character sequences by, for each respective bounding box of the plurality of bounding boxes, performing a set of optical character recognition (OCR) operations to determine a respective character sequence of the plurality of character sequences. The method also includes updating a plurality of tracklets to indicate the plurality of bounding boxes and storing the plurality of tracklets in association with the plurality of character sequences in a memory of the camera device.
Type: Grant
Filed: February 13, 2023
Date of Patent: May 7, 2024
Assignee: Verkada Inc.
Inventors: Mayank Gupta, Suraj Arun Vathsa, Song Cao, Yi Xu, Yuanyuan Chen, Yunchao Gong
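Storing bounding boxes and recognized character sequences per tracklet can be as simple as a dictionary keyed by track id. The sketch below assumes an upstream tracker has already associated each detection with a track id; the tuple layout and field names are illustrative, not the patent's data structures.

```python
from collections import defaultdict

def update_tracklets(tracklets, detections):
    """Append each detection's bounding box and recognized characters to its tracklet.

    detections: iterable of (track_id, (x0, y0, x1, y1), character_sequence).
    """
    for track_id, box, chars in detections:
        tracklets[track_id].append({"box": box, "chars": chars})
    return tracklets

tracklets = defaultdict(list)
update_tracklets(tracklets, [(7, (10, 20, 110, 60), "ABC123"),
                             (7, (12, 21, 112, 61), "ABC123")])
print(dict(tracklets))
```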
-
Patent number: 11978268
Abstract: Methods, systems, and apparatus including computer programs encoded on a computer storage medium, for generating convex decomposition of objects using neural network models. One of the methods includes receiving an input that depicts an object. The input is processed using a neural network to generate an output that defines a convex representation of the object. The output includes, for each of a plurality of convex elements, respective parameters that define a position of the convex element in the convex representation of the object.
Type: Grant
Filed: November 18, 2022
Date of Patent: May 7, 2024
Assignee: Google LLC
Inventors: Boyang Deng, Kyle Genova, Soroosh Yazdani, Sofien Bouaziz, Geoffrey E. Hinton, Andrea Tagliasacchi
-
Patent number: 11978269
Abstract: Aspects of the present disclosure include reconfigurable integrated circuits for characterizing particles of a sample in a flow stream. Reconfigurable integrated circuits according to certain embodiments are programmed to calculate parameters of a particle in a flow stream from detected light; compare the calculated parameters of the particle with parameters of one or more particle classifications; classify the particle based on the comparison between the parameters of the particle classifications and the calculated parameters of the particle; and adjust one or more parameters of the particle classifications based on the calculated parameters of the particle. Methods for characterizing particles in a flow stream with the subject integrated circuits are also described. Systems and integrated circuit devices programmed for practicing the subject methods, such as on a flow cytometer, are also provided.
Type: Grant
Filed: June 16, 2023
Date of Patent: May 7, 2024
Assignee: BECTON, DICKINSON AND COMPANY
Inventor: Paul Barclay Purcell
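The classify-then-adjust loop described here can be mimicked with a nearest-centroid classifier whose winning class centroid is nudged toward each newly classified particle (a running mean). This is an illustrative stand-in only; the patent's parameter representation and update rule are not specified in the abstract.

```python
import numpy as np

def classify_and_update(particle_params, centroids, counts):
    """Assign a particle to the nearest class centroid, then update that centroid toward it."""
    dists = np.linalg.norm(centroids - particle_params, axis=1)
    label = int(dists.argmin())
    counts[label] += 1
    centroids[label] += (particle_params - centroids[label]) / counts[label]  # running-mean update
    return label

centroids = np.array([[1.0, 1.0], [5.0, 5.0]])
counts = [1, 1]
print(classify_and_update(np.array([4.5, 5.2]), centroids, counts))  # -> 1, and centroid 1 shifts
```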
-
Patent number: 11978270
Abstract: An AI-assisted automatic labeling system and a method thereof are disclosed. The method comprises the following steps: selecting images from microscopic images as candidate images, using a pre-labeling module to automatically label cells in the candidate images, and dividing the labeled images into training data and verification data; using a training module and the training data to train a basic model; using a verification module to verify and modify the basic model, wherein the verification module respectively verifies at least one cell area and at least one background area of the verification data to converge the basic model and form an automatic labeling model; and using the automatic labeling model to automatically label cells in redundant images of the microscopic images. The basic model trained by the present invention can use few labeled images to perform regressive training and verification and then automatically label the redundant images accurately and efficiently.
Type: Grant
Filed: December 14, 2021
Date of Patent: May 7, 2024
Assignee: V5med Inc.
Inventors: Tzu-Kuei Shen, Chien Ting Yang, Guang-Hao Suen, Linda Siana, Liang-Wei Sheu
-
Patent number: 11978271
Abstract: Systems and methods for image understanding can include one or more object recognition systems and one or more vision language models to generate an augmented language output that can be both scene-aware and object-aware. The systems and methods can process an input image with an object recognition model to generate an object recognition output descriptive of identification details for an object depicted in the input image. The systems and methods can include processing the input image with a vision language model to generate a language output descriptive of a predicted scene description. The object recognition output can then be utilized to augment the language output to generate an augmented language output that includes the scene understanding of the language output with the specificity of the object recognition output.
Type: Grant
Filed: October 27, 2023
Date of Patent: May 7, 2024
Assignee: GOOGLE LLC
Inventors: Harshit Kharbanda, Boris Bluntschli, Vibhuti Mahajan, Louis Wang
-
Patent number: 11978272
Abstract: Adapting a machine learning model to process data that differs from training data used to configure the model for a specified objective is described. A domain adaptation system trains the model to process new domain data that differs from a training data domain by using the model to generate a feature representation for the new domain data, which describes different content types included in the new domain data. The domain adaptation system then generates a probability distribution for each discrete region of the new domain data, which describes a likelihood of the region including different content described by the feature representation. The probability distribution is compared to ground truth information for the new domain data to determine a loss function, which is used to refine model parameters. After determining that model outputs achieve a threshold similarity to the ground truth information, the model is output as a domain-agnostic model.
Type: Grant
Filed: August 9, 2022
Date of Patent: May 7, 2024
Assignee: Adobe Inc.
Inventors: Kai Li, Christopher Alan Tensmeyer, Curtis Michael Wigington, Handong Zhao, Nikolaos Barmpalios, Tong Sun, Varun Manjunatha, Vlad Ion Morariu
-
Patent number: 11978273
Abstract: Systems and techniques are provided for automatically analyzing and processing domain-specific image artifacts and document images. A process can include obtaining a plurality of document images comprising visual representations of structured text. An OCR-free machine learning model can be trained to automatically extract text data values from different types or classes of document image, based on using a corresponding region of interest (ROI) template corresponding to the structure of the document image type for at least initial rounds of annotations and training. The extracted information included in an inference prediction of the trained OCR-free machine learning model can be reviewed and validated or corrected correspondingly before being written to a database for use by one or more downstream analytical tasks.
Type: Grant
Filed: November 10, 2023
Date of Patent: May 7, 2024
Assignee: 32Health Inc.
Inventors: Deepak Ramaswamy, Ravindra Kompella, Shaju Puthussery
-
Patent number: 11978274
Abstract: A document creation support apparatus comprising at least one processor, wherein the processor is configured to: acquire an image and a character string related to the image; extract at least one feature region included in the image; specify a specific region that is a region corresponding to a phrase included in the character string, in the feature region; and present information for supporting creation of a document including the character string based on a result of the specifying.
Type: Grant
Filed: May 18, 2022
Date of Patent: May 7, 2024
Assignee: FUJIFILM Corporation
Inventor: Akimichi Ichinose
-
Patent number: 11978275
Abstract: Methods and apparatus to monitor environments are disclosed. Example audience measurement devices disclosed herein execute, in connection with a first frame of data, a three-dimensional recognition analysis on an object detected in an environment within a threshold distance from a sensor. Disclosed example audience measurement devices also detect that the object has moved outside the threshold distance from the sensor in a second frame subsequent to the first frame. Disclosed example audience measurement devices further execute a two-dimensional recognition analysis on the object in the second frame.
Type: Grant
Filed: June 15, 2020
Date of Patent: May 7, 2024
Assignee: The Nielsen Company (US), LLC
Inventors: Morris Lee, Alejandro Terrazas
-
Patent number: 11978276
Abstract: According to various embodiments, an electronic device is provided and includes a housing, a support frame which is arranged in an internal space of the housing and has a first surface, a second surface facing a direction opposite to the first surface, and a through hole, a display supported by the first surface and arranged to be seen from outside through at least a part of the housing, and an optical sensor module arranged in the second surface to face the through hole.
Type: Grant
Filed: November 11, 2022
Date of Patent: May 7, 2024
Assignee: SAMSUNG ELECTRONICS CO., LTD.
Inventors: Junhee Han, Hanul Moon, Soohwan Kim, Seungjae Bae, Inho Shin, Jiyoung Lim, Yongwon Cho, Jiwoo Lee
-
Patent number: 11978277
Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for under-display fingerprint sensor timing control are disclosed. A method includes receiving, by fingerprint sensor control circuitry, an indication to activate a fingerprint sensor that is located under a display panel of a computing device, the fingerprint sensor attached with respect to the display panel such that the fingerprint sensor is exposed to light produced by the display panel and reflected off a finger placed over the display panel at a location of the fingerprint sensor; outputting, for receipt by the fingerprint sensor, a start-sensing trigger signal at a start time synchronized with a display panel timing signal that is provided to the display panel to control emission of the display panel; and outputting, for receipt by the fingerprint sensor, a stop-sensing trigger signal at a stop time synchronized with the display panel timing signal.
Type: Grant
Filed: July 23, 2021
Date of Patent: May 7, 2024
Assignee: Google LLC
Inventors: Sangmoo Choi, Marek Mienko
-
Patent number: 11978278
Abstract: A display arrangement comprising an optical biometric imaging device for imaging a biometric object, comprising: an image sensor comprising a plurality of photodetector pixels; a lens arrangement comprising at least one lens configured to focus light reflected by a biometric object onto the image sensor; an aperture layer arranged between the object to be imaged and the image sensor, wherein the aperture layer comprises an aperture configured to limit the amount of light reaching the image sensor; and a filter element arranged in the aperture and configured to block light within a first wavelength range, wherein an area of the filter element is smaller than an area of the aperture so that a portion of light within the first wavelength range reaching the aperture layer passes through the aperture.
Type: Grant
Filed: November 23, 2021
Date of Patent: May 7, 2024
Assignee: FINGERPRINT CARDS ANACATUM IP AB
Inventors: Arvid Hammar, Hans Martinsson