Patents Examined by Amandeep Saini
-
Patent number: 11900657
Abstract: A violation detection platform may obtain image data, location data, and sensor data associated with a vehicle. The violation detection platform may determine a probability that a frame of the image data includes an image of a stop sign. The violation detection platform may determine that the probability satisfies a probability threshold. The violation detection platform may identify location data and sensor data associated with the frame of the image data based on the probability satisfying the probability threshold. The violation detection platform may determine an occurrence of a type of a stop sign violation based on the probability, the location data, and the sensor data. The violation detection platform may perform one or more actions based on determining the occurrence of the type of the stop sign violation.
Type: Grant
Filed: August 24, 2020
Date of Patent: February 13, 2024
Assignee: Verizon Connect Development Limited
Inventors: Luca Bravi, Luca Kubin, Leonardo Taccari, Francesco Sambo, Matteo Simoncini, Douglas Coimbra De Andrade, Stefano Caprasecca
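A minimal Python sketch of the threshold-and-fuse step the abstract describes, for illustration only: the data layout, threshold values, and violation labels are assumptions, not details from the patent.

```python
# Hypothetical sketch: field names, thresholds, and labels are illustrative assumptions.
from dataclasses import dataclass
from typing import List, Optional

PROB_THRESHOLD = 0.8      # assumed threshold for "frame shows a stop sign"
STOP_SPEED_MPS = 0.5      # assumed speed below which the vehicle counts as stopped

@dataclass
class FrameSample:
    timestamp: float
    stop_sign_prob: float      # detector output for this frame
    latitude: float
    longitude: float
    speed_mps: float           # from the vehicle's sensor data

def classify_stop_sign_event(frames: List[FrameSample]) -> Optional[str]:
    """Return a violation type for frames around one stop sign, or None."""
    relevant = [f for f in frames if f.stop_sign_prob >= PROB_THRESHOLD]
    if not relevant:
        return None  # no frame confidently contains a stop sign
    window = [f for f in frames
              if relevant[0].timestamp <= f.timestamp <= relevant[-1].timestamp + 5.0]
    min_speed = min(f.speed_mps for f in window)
    if min_speed <= STOP_SPEED_MPS:
        return None                      # vehicle came to a stop: no violation
    if min_speed < 3.0:
        return "rolling_stop_violation"  # slowed down but never stopped
    return "ran_stop_sign_violation"     # passed the sign at speed
```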
-
Patent number: 11899743
Abstract: Disclosed is a reconfigurable parallel 3-Dimensional (3-D) convolution engine for performing 3-D convolution and parallel feature map extraction on an image. The reconfigurable parallel 3-D convolution engine further comprises a plurality of CNN reconfigurable engines configured to perform 3-D convolution in parallel to process a plurality of feature maps; a kernel memory space, present in each instance of the CNN reconfigurable engine, capable of holding a set of parameters associated with a network layer for each operational instance of the CNN reconfigurable engine; and at least one memory controller, an Input Feature Map Memory (FMM) cluster, and an Output FMM cluster.
Type: Grant
Filed: December 29, 2020
Date of Patent: February 13, 2024
Assignee: HCL TECHNOLOGIES LIMITED
Inventors: Prasanna Venkatesh Balasubramaniyan, Sainarayanan Gopalakrishnan, Gunamani Rajagopal
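The patent describes dedicated hardware; as a hedged software analogue, the sketch below splits the input feature maps across several "engines", lets each convolve its slice with the matching kernel slice, and sums the partial outputs, which is a standard decomposition of 3-D convolution (the engine count and tensor shapes are assumptions).

```python
import torch
import torch.nn.functional as F

N_ENGINES = 4                      # assumed number of parallel CNN engines

def parallel_conv3d(x, weight, bias=None):
    # x: (batch, in_ch, D, H, W); weight: (out_ch, in_ch, kD, kH, kW)
    in_ch = x.shape[1]
    chunk = -(-in_ch // N_ENGINES)                 # ceiling division
    partial_outputs = []
    for e in range(N_ENGINES):                     # each iteration models one engine
        lo, hi = e * chunk, min((e + 1) * chunk, in_ch)
        if lo >= hi:
            break
        partial_outputs.append(F.conv3d(x[:, lo:hi], weight[:, lo:hi]))
    y = torch.stack(partial_outputs).sum(dim=0)    # accumulate partial feature maps
    if bias is not None:
        y = y + bias.view(1, -1, 1, 1, 1)
    return y

x = torch.randn(1, 8, 16, 32, 32)                  # a batch of 3-D feature maps
w = torch.randn(12, 8, 3, 3, 3)                    # one layer's kernel parameters
assert torch.allclose(parallel_conv3d(x, w), F.conv3d(x, w), atol=1e-3)
```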
-
Patent number: 11885880
Abstract: Adaptive phase unwrapping for a time-of-flight camera. A scene is illuminated with modulation light having two or more frequencies that do not have a common integer denominator. The modulation light that is reflected off objects within the scene is received at a sensor array. The received modulation light is then processed and weighted in the complex domain to determine unwrapped phases for each of the two or more frequencies of modulation light.
Type: Grant
Filed: July 31, 2019
Date of Patent: January 30, 2024
Assignee: Microsoft Technology Licensing, LLC
Inventor: Zhanping Xu
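For context on the unwrapping problem, here is a plain brute-force candidate search over wrap counts for two modulation frequencies; it is not the patent's complex-domain weighting, and the frequencies and maximum range are assumed values.

```python
import itertools
import numpy as np

C = 299792458.0  # speed of light, m/s

def unwrap_two_frequencies(phi1, phi2, f1, f2, d_max=7.5):
    """Brute-force search for wrap counts (n1, n2) that make the two
    frequency measurements agree on distance. Phases in radians [0, 2*pi)."""
    best = None
    n1_max = int(np.ceil(2 * d_max * f1 / C))
    n2_max = int(np.ceil(2 * d_max * f2 / C))
    for n1, n2 in itertools.product(range(n1_max + 1), range(n2_max + 1)):
        d1 = C * (phi1 + 2 * np.pi * n1) / (4 * np.pi * f1)
        d2 = C * (phi2 + 2 * np.pi * n2) / (4 * np.pi * f2)
        if best is None or abs(d1 - d2) < best[0]:
            best = (abs(d1 - d2), 0.5 * (d1 + d2))
    return best[1]   # agreed (unwrapped) distance in metres

# Example: a target at 7.3 m observed with 80 MHz and 60 MHz modulation.
d_true, f1, f2 = 7.3, 80e6, 60e6
phi1 = (4 * np.pi * f1 * d_true / C) % (2 * np.pi)
phi2 = (4 * np.pi * f2 * d_true / C) % (2 * np.pi)
print(unwrap_two_frequencies(phi1, phi2, f1, f2))   # ~7.3
```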
-
Patent number: 11880981
Abstract: This disclosure relates generally to estimating age of a leaf using morphological features extracted from segmented leaves. Traditionally, leaf age estimation requires a single leaf to be plucked from the plant and its image to be captured in a controlled environment. The method and system of the present disclosure obviates these needs and enables obtaining one or more full leaves from images captured in an uncontrolled environment. The method comprises segmenting the image to identify veins of the leaves that further enable obtaining the full leaves. The obtained leaves further enable identifying an associated plant species. The method also discloses some morphological features which are fed to a pre-trained multivariable linear regression model to estimate age of every leaf. The estimated leaf age finds application in estimation of multiple plant characteristics like photosynthetic rate, transpiration, nitrogen content and health of the plants.
Type: Grant
Filed: September 2, 2021
Date of Patent: January 23, 2024
Assignee: TATA CONSULTANCY SERVICES LIMITED
Inventors: Prakruti Vinodchandra Bhatt, Sanat Sarangi, Srinivasu Pappula, Avil Saunshi
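A small sketch of the regression step, assuming a handful of illustrative morphological features and synthetic training labels; the patent does not publish its exact feature set or coefficients here.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# One row per segmented leaf; columns (area_cm2, perimeter_cm, vein_density)
# are assumed morphological features for illustration only.
X_train = np.array([
    [12.1, 14.0, 0.32],
    [18.4, 17.5, 0.41],
    [25.0, 20.3, 0.47],
    [30.2, 22.8, 0.55],
])
y_train = np.array([7.0, 14.0, 21.0, 28.0])        # leaf age in days (synthetic labels)

model = LinearRegression().fit(X_train, y_train)   # multivariable linear regression

new_leaf = np.array([[21.7, 19.0, 0.44]])          # features of a newly segmented leaf
print(f"estimated leaf age: {model.predict(new_leaf)[0]:.1f} days")
```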
-
Patent number: 11880747
Abstract: An image recognition method, a training system for an object recognition model and a training method for an object recognition model are provided. The image recognition method includes the following steps. At least one original sample image of an object in a field is obtained, together with object range information and object type information in the original sample image. At least one physical parameter is adjusted to generate plural simulated sample images of the object. The object range information and the object type information of the object in each of the simulated sample images are automatically marked. A machine learning procedure is performed to train an object recognition model. An image recognition procedure is performed on an input image.
Type: Grant
Filed: December 27, 2019
Date of Patent: January 23, 2024
Assignee: INDUSTRIAL TECHNOLOGY RESEARCH INSTITUTE
Inventors: Hsin-Cheng Lin, Sen-Yih Chou
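A minimal sketch of the "adjust physical parameters, carry the labels over" idea; the parameter choices (brightness, mirroring) and data layout are assumptions for illustration only.

```python
import numpy as np

def simulate_samples(image, box, label, n_variants=5, rng=None):
    """image: HxWx3 uint8 array; box: (x_min, y_min, x_max, y_max); label: class id.
    Returns a list of (simulated_image, box, label) with labels marked automatically."""
    rng = rng or np.random.default_rng(0)
    h, w = image.shape[:2]
    samples = []
    for _ in range(n_variants):
        brightness = rng.uniform(0.6, 1.4)            # simulated lighting change
        sim = np.clip(image.astype(np.float32) * brightness, 0, 255).astype(np.uint8)
        if rng.random() < 0.5:                        # simulated mirrored viewpoint
            sim = sim[:, ::-1]
            x_min, y_min, x_max, y_max = box
            new_box = (w - x_max, y_min, w - x_min, y_max)   # flip the range info too
        else:
            new_box = box
        samples.append((sim, new_box, label))         # range + type info auto-marked
    return samples
```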
-
Patent number: 11875587
Abstract: An information processing system includes circuitry. The circuitry causes a terminal apparatus to display a setting screen for setting an extraction area for extracting, from a tabular image, an item value for each of one or more extraction target items. The setting screen displays, on the tabular image, an extraction guide representing the extraction area according to each of the one or more extraction target items. The circuitry further receives an operation of setting a position of the extraction guide on the setting screen.
Type: Grant
Filed: January 12, 2021
Date of Patent: January 16, 2024
Assignee: Ricoh Company, Ltd.
Inventors: Hiroshi Kobayashi, Yoshiharu Tojo, Fumihiro Teshima
-
Patent number: 11869236
Abstract: Data for training a machine learning algorithm to detect airborne objects is generated by capturing background images using a camera aboard an aerial vehicle, and generating a trajectory of a synthetic object, such as another aerial vehicle, in three-dimensional space. The trajectory is projected into an image plane of the camera, determined from a pose of the camera calculated using inertial measurement unit data captured by the aerial vehicle. Images of the synthetic object may be rendered based on locations of the trajectory in the image plane at specific times. Pixel-accurate locations for the rendered images may be determined by calculating a homography from consecutive images captured using the camera, and adjusting locations of the trajectory using the homography. The rendered images may be blended into the images captured by the camera at such locations, and used to train a machine learning algorithm.
Type: Grant
Filed: August 24, 2020
Date of Patent: January 9, 2024
Assignee: Amazon Technologies, Inc.
Inventor: Francesco Giuseppe Callari
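A hedged sketch of projecting a synthetic 3-D trajectory into the camera's image plane with a standard pinhole model; the intrinsics, pose, and trajectory below are placeholders, not values from the patent.

```python
import numpy as np

K = np.array([[800.0, 0.0, 640.0],      # fx, 0, cx  (assumed intrinsics)
              [0.0, 800.0, 360.0],      # 0, fy, cy
              [0.0,   0.0,   1.0]])

def project_trajectory(points_world, R, t):
    """points_world: (N, 3) trajectory in the world frame; R, t: world-to-camera pose."""
    points_cam = points_world @ R.T + t          # rotate/translate into camera frame
    pixels = (K @ points_cam.T).T                # apply intrinsics
    return pixels[:, :2] / pixels[:, 2:3]        # perspective divide -> (u, v)

# Straight-line synthetic trajectory 100 m in front of the camera.
trajectory = np.stack([np.linspace(-20, 20, 10),        # x: flying left to right
                       np.zeros(10),                     # y: level flight
                       np.full(10, 100.0)], axis=1)      # z: 100 m ahead
R, t = np.eye(3), np.zeros(3)                            # camera pose from IMU (placeholder)
print(project_trajectory(trajectory, R, t))              # image-plane location per timestep
```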
-
Patent number: 11869272
Abstract: A processor-implemented method includes: generating a preprocessed infrared (IR) image by performing first preprocessing based on an IR image including an object; generating a preprocessed depth image by performing second preprocessing based on a depth image including the object; and determining whether the object is a genuine object based on the preprocessed IR image and the preprocessed depth image.
Type: Grant
Filed: June 12, 2020
Date of Patent: January 9, 2024
Assignee: Samsung Electronics Co., Ltd.
Inventors: Youngjun Kwak, Minsu Ko, Youngsung Kim, Heewon Kim, Ju Hwan Song, Byung In Yoo, Seon Min Rhee, Yong-il Lee, Jiho Choi, Seungju Han
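A minimal sketch of the two-branch preprocessing followed by a genuine/spoof decision; the normalisation choices and the tiny placeholder classifier are assumptions, not the patented method.

```python
import torch
import torch.nn as nn

def preprocess_ir(ir):                             # ir: (H, W) raw infrared intensities
    ir = ir.float()
    return (ir - ir.min()) / (ir.max() - ir.min() + 1e-6)   # scale to [0, 1]

def preprocess_depth(depth, near=0.2, far=1.5):    # depth in metres (assumed working range)
    depth = depth.float().clamp(near, far)
    return (depth - near) / (far - near)           # scale working range to [0, 1]

classifier = nn.Sequential(                        # placeholder genuine/spoof classifier
    nn.Conv2d(2, 8, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 1),
)

ir_img = torch.randint(0, 4096, (112, 112))        # fake 12-bit IR frame
depth_img = torch.rand(112, 112) * 2.0             # fake depth frame
x = torch.stack([preprocess_ir(ir_img), preprocess_depth(depth_img)]).unsqueeze(0)
is_genuine = torch.sigmoid(classifier(x)) > 0.5    # final liveness decision
```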
-
Patent number: 11861923
Abstract: The present disclosure describes a method, an apparatus, and a non-transitory computer-readable medium for detecting sensitive text information such as privacy-related text information from a signal and modifying the signal by removing the detected sensitive text information therefrom. The apparatus receives the signal such as an image, a video clip, or an audio clip, and recognizes a text string therefrom. The apparatus then detects, from the text string, a substring based on a similarity between the substring and a regular expression, and modifies the signal by removing information related to the detected substring from the signal.
Type: Grant
Filed: December 31, 2021
Date of Patent: January 2, 2024
Assignee: HUAWEI TECHNOLOGIES CO., LTD.
Inventors: Zhiwei Shang, Tongyu Ge, Zhijun Mo
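A sketch of the match-and-redact step on recognized text, simplified to direct regular-expression matching rather than the similarity scoring the abstract mentions; the patterns below (e-mail, US-style SSN) are illustrative assumptions.

```python
import re

SENSITIVE_PATTERNS = [
    re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),          # e-mail addresses
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),            # social-security-like numbers
]

def redact_text(text: str) -> str:
    """Remove substrings of the recognized text that match a sensitive pattern."""
    for pattern in SENSITIVE_PATTERNS:
        text = pattern.sub("[REDACTED]", text)
    return text

recognized = "Contact jane.doe@example.com, SSN 123-45-6789, re: invoice 42."
print(redact_text(recognized))
# Contact [REDACTED], SSN [REDACTED], re: invoice 42.
```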
-
Patent number: 11854275
Abstract: Systems and methods for detecting symptoms of occupant illness are disclosed herein. In embodiments, a storage is configured to maintain a visualization application and data from one or more sources, such as an audio source, an image source, and/or a radar source. A processor is in communication with the storage and a user interface. The processor is programmed to receive data from the one or more sources, execute human-detection models based on the received data, execute activity-recognition models to recognize symptoms of illness based on the data from the one or more sources, determine a location of the recognized symptoms, and execute a visualization application to display information in the user interface. The visualization application can show a background image with an overlaid image that includes an indicator for each location of a recognized symptom of illness. Additionally, data from the audio source, image source, and/or radar source can be fused.
Type: Grant
Filed: October 23, 2020
Date of Patent: December 26, 2023
Inventors: Sirajum Munir, Samarjit Das, Yunze Zeng, Vivek Jain
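A minimal sketch of the overlay step only: drawing an indicator on the background image at each location where a symptom was recognized. The marker style and the fused-detection format are assumptions.

```python
import numpy as np

def overlay_symptoms(background, detections, radius=6):
    """background: HxWx3 uint8; detections: list of dicts with 'row', 'col', 'symptom'."""
    out = background.copy()
    h, w = out.shape[:2]
    for det in detections:
        r0, c0 = det["row"], det["col"]
        rr, cc = np.ogrid[:h, :w]
        mask = (rr - r0) ** 2 + (cc - c0) ** 2 <= radius ** 2
        out[mask] = (255, 0, 0)                 # red disc marks a recognized symptom
    return out

frame = np.zeros((120, 160, 3), dtype=np.uint8)          # placeholder background image
fused = [{"row": 40, "col": 60, "symptom": "cough"},      # fused audio/image/radar output
         {"row": 90, "col": 120, "symptom": "sneeze"}]
annotated = overlay_symptoms(frame, fused)                # image shown in the user interface
```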
-
Patent number: 11854303
Abstract: The Selectable Facial Recognition Assistance Method and System (SFRA) allows law enforcement officers, military personnel, and other users to rapidly and accurately identify wanted or known personnel by combining the ability to select the type of examination with selection of optimized face matching algorithms and expert face examiners. The SFRA allows the user to select "Quick Looks" or "Long Looks" with respect to the level of detail of examination, along with race-based selected algorithms and expert face examiners. The race-based focused examination, combining specialized algorithms and examiners, will produce the highest confidence levels of match or no-match based on photographs taken or submitted, as well as database photographs. The SFRA will greatly assist users and persons being photographed in avoiding false matches and false no-matches. This system will greatly reduce misidentification, profiling, or any other identification weaknesses that the systems and users may have.
Type: Grant
Filed: August 9, 2019
Date of Patent: December 26, 2023
Inventor: Robert William Kocher
-
Patent number: 11854168
Abstract: The method includes generating, for each of a plurality of original images, a first artificially degraded image by applying a first image-artifact-generation logic to each of the original images; and generating the program logic by training an untrained version of a first machine-learning logic that encodes a first artifacts-removal logic on the original images and their respectively generated first degraded images; and returning the trained first machine-learning logic as the program logic or as a component thereof. The first image-artifact-generation logic is A) an image-acquisition-system-specific image-artifact-generation logic or B) a tissue-staining-artifact-generation logic.
Type: Grant
Filed: November 10, 2022
Date of Patent: December 26, 2023
Assignee: HOFFMANN-LA ROCHE INC.
Inventor: Eldad Klaiman
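A hedged sketch of the pairing idea: degrade each original image with a chosen artifact generator, then train a model to map degraded images back to the originals. The Gaussian-noise degradation and tiny network stand in for the acquisition- or staining-specific logic and are assumptions.

```python
import torch
import torch.nn as nn

def degrade(img, sigma=0.1):
    """Stand-in for an image-acquisition- or staining-specific artifact generator."""
    return (img + sigma * torch.randn_like(img)).clamp(0.0, 1.0)

artifact_removal = nn.Sequential(            # untrained artifacts-removal logic
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 3, 3, padding=1),
)
optimizer = torch.optim.Adam(artifact_removal.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

originals = torch.rand(32, 3, 64, 64)        # placeholder training images
for epoch in range(5):
    degraded = degrade(originals)            # first artificially degraded images
    restored = artifact_removal(degraded)
    loss = loss_fn(restored, originals)      # learn to undo the injected artifact
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
# The trained `artifact_removal` module plays the role of the returned program logic.
```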
-
Patent number: 11847778
Abstract: Methods and apparatuses are disclosed to personalize image capture operations of imaging equipment according to models that correspond uniquely to subjects being imaged. According to these techniques, a subject's face may be detected from a first image supplied by an image source and a first model of the subject may be developed from the detected face. The first model of the subject may be compared to another model of the subject developed from other images. Image adjustment parameters may be generated from a comparison of these models, which may control image adjustment techniques that are applied to the newly captured image of the subject. In this manner, aspects of the present disclosure may generate image capture operations that are tailored to characteristics of the subjects being imaged and avoid artifacts that otherwise could cause image degradation.
Type: Grant
Filed: August 21, 2020
Date of Patent: December 19, 2023
Assignee: APPLE INC.
Inventors: Osman G. Sezer, Vinay Sharma, Alok Deshpande, Abhishek Singh
-
Patent number: 11842465
Abstract: Systems and methods for motion correction in medical imaging are provided in the present disclosure. The systems may obtain at least two image sequences relating to a subject. Each of the at least two image sequences may be reconstructed based on image data that is acquired by a medical imaging device during one of at least two time periods. The subject may undergo a physiological motion during the at least two time periods. The systems may generate, based on the at least two image sequences, at least one corrected image sequence relating to the subject by correcting, using a motion correction model, an artifact caused by the physiological motion.
Type: Grant
Filed: November 21, 2020
Date of Patent: December 12, 2023
Assignee: SHANGHAI UNITED IMAGING HEALTHCARE CO., LTD.
Inventors: Guotao Quan, Yi Wang, Jiao Tian
-
Patent number: 11842472
Abstract: According to embodiments of the present invention, a method, a device and a computer program product for image processing are provided. A computing device obtains an image of a first object, the image presenting a defect of the first object. A computing device obtains defect distribution information indicating respective frequencies of a plurality of predetermined categories of defects presented at corresponding locations in a plurality of training images, the plurality of training images presenting second objects and being used for training a defect classifier. A computing device determines a target category of the defect of the first object by applying the image and the defect distribution information to the defect classifier. A computing device generates one or more correction notifications.
Type: Grant
Filed: March 31, 2020
Date of Patent: December 12, 2023
Assignee: International Business Machines Corporation
Inventors: Jinfeng Li, Guo Qiang Hu, Jian Xu, Fan Li, JingChang Huang, Jun Zhu
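One plausible way to use the defect distribution information is as a location-dependent prior that reweights the classifier's class probabilities; this weighting scheme is an assumption for illustration, since the abstract only states that both inputs feed the classifier.

```python
import numpy as np

CATEGORIES = ["scratch", "stain", "crack"]     # assumed predetermined defect categories

def classify_with_prior(class_probs, location, frequency_map):
    """class_probs: classifier output per category; frequency_map[category] is an HxW
    array of historical defect frequencies; location is (row, col) of the defect."""
    r, c = location
    prior = np.array([frequency_map[cat][r, c] for cat in CATEGORIES])
    posterior = class_probs * (prior + 1e-6)           # avoid zeroing unseen classes
    posterior /= posterior.sum()
    return CATEGORIES[int(np.argmax(posterior))], posterior

freq = {cat: np.random.default_rng(i).random((32, 32))    # synthetic distribution info
        for i, cat in enumerate(CATEGORIES)}
target, post = classify_with_prior(np.array([0.4, 0.35, 0.25]), (10, 20), freq)
print(target, post.round(3))
```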
-
Patent number: 11836181
Abstract: A system or process may generate a summarization of multimedia content by determining one or more salient moments therefrom. Multimedia content may be received and a plurality of frames and audio, visual, and metadata elements associated therewith are extracted from the multimedia content. A plurality of importance sub-scores may be generated for each frame of the multimedia content, each of the plurality of sub-scores being associated with a particular analytical modality. For each frame, the plurality of importance sub-scores associated therewith may be aggregated into an importance score. The frames may be ranked by importance and a plurality of top-ranked frames are identified and determined to satisfy an importance threshold. The plurality of top-ranked frames are sequentially arranged and merged into a plurality of moment candidates that are ranked for importance. A subset of top-ranked moment candidates are merged into a final summarization of the multimedia content.
Type: Grant
Filed: May 22, 2020
Date of Patent: December 5, 2023
Assignee: SalesTing, Inc.
Inventors: Piyush Saggi, Nitesh Chhajwani, Thomas Ploetz
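A minimal sketch of the aggregate-rank-merge flow: combine per-modality sub-scores into one importance score, keep frames above a threshold, and merge consecutive kept frames into candidate moments. The modality weights and threshold are assumptions.

```python
from typing import Dict, List, Tuple

MODALITY_WEIGHTS = {"audio": 0.3, "visual": 0.5, "metadata": 0.2}   # assumed weights

def aggregate(sub_scores: Dict[str, float]) -> float:
    return sum(MODALITY_WEIGHTS[m] * s for m, s in sub_scores.items())

def moment_candidates(frame_scores: List[Tuple[int, Dict[str, float]]],
                      threshold: float = 0.6) -> List[Tuple[int, int]]:
    kept = sorted(i for i, s in frame_scores if aggregate(s) >= threshold)
    moments: List[Tuple[int, int]] = []
    for idx in kept:
        if moments and idx == moments[-1][1] + 1:      # extend a contiguous run
            moments[-1] = (moments[-1][0], idx)
        else:
            moments.append((idx, idx))                 # start a new candidate moment
    return moments

frames = [(0, {"audio": 0.2, "visual": 0.3, "metadata": 0.1}),
          (1, {"audio": 0.9, "visual": 0.8, "metadata": 0.5}),
          (2, {"audio": 0.7, "visual": 0.9, "metadata": 0.6}),
          (5, {"audio": 0.8, "visual": 0.9, "metadata": 0.9})]
print(moment_candidates(frames))   # [(1, 2), (5, 5)]
```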
-
Patent number: 11836944
Abstract: An apparatus estimates a position of each object in image data in which a plurality of objects is imaged. The apparatus includes a first acquisition unit configured to acquire position information indicating positions of joints of the plurality of objects in the image data, a second acquisition unit configured to acquire a score map in which a feature for identifying each object is converted into a numerical value, the score map being output by a pre-trained model in response to input of the image data, and an identification unit configured to identify positions of joints belonging to each of the plurality of objects, based on the position information and the score map.
Type: Grant
Filed: November 11, 2020
Date of Patent: December 5, 2023
Assignee: CANON KABUSHIKI KAISHA
Inventor: Shuhei Ogawa
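A hedged sketch, loosely in the style of associative-embedding grouping: the score map assigns each pixel an identity value, and joints whose values are close are attributed to the same person. The grouping tolerance and greedy strategy are assumptions.

```python
import numpy as np

def group_joints(joint_positions, score_map, tol=0.5):
    """joint_positions: list of (joint_type, row, col); score_map: HxW array.
    Returns a list of objects, each a dict {joint_type: (row, col)}."""
    people = []           # each entry: (mean identity value, {joint_type: position})
    for joint_type, r, c in joint_positions:
        value = score_map[r, c]
        for i, (mean_val, joints) in enumerate(people):
            if abs(value - mean_val) < tol and joint_type not in joints:
                joints[joint_type] = (r, c)
                n = len(joints)
                people[i] = ((mean_val * (n - 1) + value) / n, joints)
                break
        else:                                   # no existing object matched
            people.append((value, {joint_type: (r, c)}))
    return [joints for _, joints in people]

score_map = np.zeros((64, 64))
score_map[:, :32] = 1.0                          # identity value of person A's region
score_map[:, 32:] = 3.0                          # identity value of person B's region
detections = [("head", 10, 10), ("wrist", 40, 20), ("head", 12, 50), ("wrist", 42, 55)]
print(group_joints(detections, score_map))       # two objects with two joints each
```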
-
Patent number: 11823474
Abstract: The present disclosure relates to a handwritten text recognition method, including: acquiring an information sequence including a plurality of track points of handwritten text, wherein information on each track point comprises its abscissa, writing time and writing state value; dividing the plurality of track points into a plurality of strokes according to the writing state value of each track point, the writing state value including a first value representative of stroke pen-up and a second value representative of stroke pen-down; calculating a first segmentation threshold of the handwritten text; determining a first text segmentation point according to a result of comparison between an absolute value of a difference between abscissas of a start track point of one stroke and an end track point of its previous stroke and the first segmentation threshold; and performing text segmentation according to the first text segmentation point to obtain a text segmentation result.
Type: Grant
Filed: October 27, 2020
Date of Patent: November 21, 2023
Assignee: BOE Technology Group Co., Ltd.
Inventors: Huanhuan Zhang, Guangwei Huang
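A minimal sketch of the segmentation rule described: split track points into strokes at pen-up transitions, then cut the text wherever the horizontal gap between consecutive strokes exceeds a threshold. Using the mean inter-stroke gap as the threshold, and the specific state values, are assumptions for illustration.

```python
from statistics import mean

PEN_DOWN, PEN_UP = 0, 1          # assumed writing state values

def split_strokes(track_points):
    """track_points: list of dicts with 'x', 'time', 'state'. A stroke runs up to and
    including the next pen-up point."""
    strokes, current = [], []
    for p in track_points:
        current.append(p)
        if p["state"] == PEN_UP:
            strokes.append(current)
            current = []
    if current:
        strokes.append(current)
    return strokes

def segment_text(track_points):
    strokes = split_strokes(track_points)
    gaps = [abs(strokes[i][0]["x"] - strokes[i - 1][-1]["x"])   # start vs previous end
            for i in range(1, len(strokes))]
    if not gaps:
        return [strokes]
    threshold = mean(gaps)                      # "first segmentation threshold"
    segments, current = [], [strokes[0]]
    for i in range(1, len(strokes)):
        if gaps[i - 1] > threshold:             # gap wider than threshold: new segment
            segments.append(current)
            current = []
        current.append(strokes[i])
    segments.append(current)
    return segments
```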
-
Patent number: 11823411
Abstract: According to an embodiment, a reading system includes a first extractor and a reader. The first extractor extracts, from an image in which a meter is imaged, a first region surrounded with a first contour, and a second region surrounded with a second contour positioned outward of the first contour. The reader calculates a first indication based on the first region, calculates a second indication based on the second region, calculates a first score relating to the first indication based on the first region, and calculates a second score relating to the second indication based on the second region.
Type: Grant
Filed: April 15, 2022
Date of Patent: November 21, 2023
Assignees: KABUSHIKI KAISHA TOSHIBA, TOSHIBA DIGITAL SOLUTIONS CORPORATION
Inventor: Chiayu Lin
-
Patent number: 11823379
Abstract: The present disclosure provides a computer-implemented method, a device, and a computer program product using a user-guided domain adaptation (UGDA) architecture. The method includes training a combined model using a source image dataset by minimizing a supervised loss of the combined model to obtain first sharing weights for a first FCN and second sharing weights for a second FCN; training a discriminator by inputting extreme-point/mask prediction pairs for each of the source image dataset and a target image dataset and by minimizing a discriminator loss to obtain discriminator weights; and finetuning the combined model by predicting extreme-point/mask prediction pairs for the target image dataset to fool the discriminator by matching a distribution of the extreme-point/mask prediction pairs for the target image dataset with a distribution of the extreme-point/mask prediction pairs for the source image dataset.
Type: Grant
Filed: December 30, 2020
Date of Patent: November 21, 2023
Assignee: PING AN TECHNOLOGY (SHENZHEN) CO., LTD.
Inventors: Adam P Harrison, Ashwin Raju
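A hedged skeleton of the adversarial step only: the discriminator learns to tell source pairs from target pairs, and the segmentation model is then updated so that its target-domain pairs are classified as "source". The pair encoding (channel-wise concatenation) and network shapes are assumptions.

```python
import torch
import torch.nn as nn

SOURCE, TARGET = 1.0, 0.0
bce = nn.BCEWithLogitsLoss()

def discriminator_loss(disc, source_pairs, target_pairs):
    d_src = disc(source_pairs)                        # pairs: (B, C, H, W) tensors
    d_tgt = disc(target_pairs.detach())
    return (bce(d_src, torch.full_like(d_src, SOURCE)) +
            bce(d_tgt, torch.full_like(d_tgt, TARGET)))

def adversarial_finetune_loss(disc, target_pairs):
    d_tgt = disc(target_pairs)                        # gradients flow to the predictor
    return bce(d_tgt, torch.full_like(d_tgt, SOURCE)) # "fool" the discriminator

disc = nn.Sequential(nn.Conv2d(2, 16, 4, stride=2, padding=1), nn.ReLU(),
                     nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 1))
src = torch.rand(4, 2, 64, 64)     # extreme-point map + mask prediction, source domain
tgt = torch.rand(4, 2, 64, 64)     # same encoding, target domain
print(discriminator_loss(disc, src, tgt).item(), adversarial_finetune_loss(disc, tgt).item())
```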