Patents Issued on January 29, 2019
-
Patent number: 10192100
Abstract: A particle classifier system and a method of training the system are described.
Type: Grant
Filed: February 2, 2018
Date of Patent: January 29, 2019
Assignee: KONINKLIJKE PHILIPS N.V.
Inventors: Martin Christian Valvik, Niels Agersnap Larsen, Tommy Winther Berg
-
Patent number: 10192101
Abstract: A method, system, device, and/or a non-transitory computer readable medium to provide a customized application associated with a television experience based on the recognition of users located in front of a television display and in the field of view of a camera. The method may include performing an initializing operation, the initializing operation including enrolling a plurality of users in a database of a computer system, acquiring a wide image using the camera, and scanning the wide image for biometric information; and performing an identification operation requested by the application including: acquiring a second wide image with the camera, extracting an active area from the second wide image, storing the extracted active area as a second fast scanning area image, and extracting the biometric data of a face appearing in the second fast scanning area image.
Type: Grant
Filed: July 13, 2017
Date of Patent: January 29, 2019
Assignee: NAGRAVISION S.A.
Inventors: Christophe Oddou, Thierry Dagaeff, Nicholas Fishwick
-
Patent number: 10192102
Abstract: In one embodiment, a computing device determines a Completely Automated Public Turing Test to Tell Computers and Humans Apart (CAPTCHA). The CAPTCHA includes a first static image that has image sections that are arranged in a first order. Each of the image sections corresponds to a unique identifier. The CAPTCHA further includes a second static image that includes each of the image sections of the first static image that are arranged in a second order. The computing device generates web-browser-executable code for converting the second static image to the first static image based on the first static image, the first order, and the unique identifiers. The computing device sends the second static image and the web-browser-executable code to a client device.
Type: Grant
Filed: April 17, 2017
Date of Patent: January 29, 2019
Assignee: Facebook, Inc.
Inventor: Jonathan Frank
-
Patent number: 10192103
Abstract: A system and method for performing facial recognition is described. In some implementations, the system and method identify points of a three-dimensional scan that are associated with occlusions, such as eyeglasses, to a face of a target subject and remove the identified points from the three-dimensional scan.
Type: Grant
Filed: January 13, 2017
Date of Patent: January 29, 2019
Assignee: StereoVision Imaging, Inc.
Inventors: Raghavender Reddy Jillela, Trina D. Russ
-
Patent number: 10192104
Abstract: Systems and methods are provided for authenticating a user of a computing device. The system comprises one or more memory devices storing instructions, and one or more processors configured to execute the instructions to provide, to a computing device associated with a user, an indication of a prescribed authentication parameter. The system also receives image data including an image of the user of the computing device captured using an image sensor of the computing device. The system determines an identity of the user based on an analysis of the received image data, determines whether the received image data includes a feature corresponding to the prescribed authentication parameter, and authenticates the user based at least in part on whether the received image data includes the feature corresponding to the prescribed authentication parameter.
Type: Grant
Filed: October 20, 2017
Date of Patent: January 29, 2019
Assignee: Capital One Services, LLC
Inventor: Colin Robert MacDonald
-
Patent number: 10192105
Abstract: A sign language recognizer is configured to detect interest points in an extracted sign language feature, wherein the interest points are localized in space and time in each image acquired from a plurality of frames of a sign language video; apply a filter to determine one or more extrema of a central region of the interest points; associate features with each interest point using a neighboring pixel function; cluster a group of extracted sign language features from the images based on a similarity between the extracted sign language features; represent each image by a histogram of visual words corresponding to the respective image to generate a code book; train a classifier to classify each extracted sign language feature using the code book; detect a posture in each frame of the sign language video using the trained classifier; and construct a sign gesture based on the detected postures.
Type: Grant
Filed: June 27, 2018
Date of Patent: January 29, 2019
Assignee: King Fahd University of Petroleum and Minerals
Inventors: Sabri A. Mahmoud, Ala Addin Sidig
-
Patent number: 10192106
Abstract: A moving object detection apparatus that analyzes a photographic image captured by an onboard camera and detects a moving object is provided. The moving object detection apparatus includes an imaging portion that captures the photographic image at a predetermined time interval; a peripheral region detection portion that detects a first moving object of a size smaller than a predetermined size and a second moving object of a size larger than the predetermined size as the moving object in a peripheral region; and a central region detection portion that detects the first moving object as the moving object in a central region. The central region detection portion detects the first moving object and the second moving object as the moving object when the second moving object has been detected in the peripheral region at a previous time.
Type: Grant
Filed: January 9, 2015
Date of Patent: January 29, 2019
Assignee: DENSO CORPORATION
Inventor: Shusaku Shigemura
-
Patent number: 10192107
Abstract: An object detection method and an object detection apparatus are provided. The object detection method includes: mapping at least one image frame in an image sequence into a three dimensional physical space to obtain three dimensional coordinates of each pixel in the at least one image frame; extracting a foreground region in the at least one image frame; segmenting the foreground region into a set of blobs; and detecting, for each blob in the set of blobs, an object in the blob through a neural network based on the three dimensional coordinates of at least one predetermined reference point in the blob, to obtain an object detection result.
Type: Grant
Filed: March 15, 2018
Date of Patent: January 29, 2019
Assignees: BEIJING KUANGSHI TECHNOLOGY CO., LTD., MEGVII (BEIJING) TECHNOLOGY CO., LTD.
Inventors: Gang Yu, Chao Li, Qizheng He, Qi Yin
-
Patent number: 10192108
Abstract: Systems and methods are provided for assessing whether mobile deposit processing engines meet specified standards for mobile deposit of financial documents. A mobile deposit processing engine (MDE) is evaluated to determine if it can perform technical capabilities for improving the quality of and extracting content from an image of a financial document. A verification process then begins, where the MDE performs the image quality enhancements and text extraction steps on sets of images from a test deck. The results of the processing of the test deck are then evaluated by comparing confidence levels with thresholds to determine if each set of images should be accepted or rejected. Further analysis determines whether any of the sets of images were falsely accepted or rejected in error. An overall error rate is then compared with minimum accuracy criteria, and if the criteria are met, the MDE meets the standard for mobile deposit.
Type: Grant
Filed: March 17, 2015
Date of Patent: January 29, 2019
Assignee: MITEK SYSTEMS, INC.
Inventors: Grigori Nepomniachtchi, Mike Strange
-
Patent number: 10192109
Abstract: According to the invention, a system for authenticating a user of a device is disclosed. The system may include a first image sensor, a determination unit, and an authentication unit. The first image sensor may be for capturing at least one image of at least part of a user. The determination unit may be for determining information relating to the user's eye based at least in part on at least one image captured by the first image sensor. The authentication unit may be for authenticating the user using the information relating to the user's eye.
Type: Grant
Filed: April 18, 2016
Date of Patent: January 29, 2019
Assignee: Tobii AB
Inventors: Mårten Skogö, Richard Hainzl, Henrik Jönsson, Anders Vennström, Erland George-Svahn, John Elvesjö
-
Patent number: 10192110
Abstract: There is provided a vehicle safety system including a sensing unit, a processing unit, a control unit and a display unit. The sensing unit is configured to capture an image frame containing an eyeball image from a predetermined distance. The processing unit is configured to calculate a pupil position of the eyeball image in the image frame and generate a drive signal corresponding to the pupil position. The control unit is configured to trigger a vehicle device associated with the pupil position according to the drive signal. The display unit is configured to show information of the vehicle device.
Type: Grant
Filed: August 25, 2017
Date of Patent: January 29, 2019
Assignee: PIXART IMAGING INC.
Inventors: Chun-Wei Chen, Shih-Wei Kuo
-
Patent number: 10192111
Abstract: Aspects of the subject disclosure may include, for example, a method comprising obtaining, by a processing system including a processor, first and second models for a structure of an object, based respectively on ground-level and aerial observations of the object. Model parameters are determined for a three-dimensional (3D) third model of the object based on the first and second models; the determining comprises a transfer learning procedure. Data representing observations of the object is captured at an airborne unmanned aircraft system (UAS) operating at an altitude between that of the ground-level observations and the aerial observations. The method also comprises dynamically adjusting the third model in accordance with the operating altitude of the UAS; updating the adjusted third model in accordance with the data; and determining a 3D representation of the structure of the object, based on the updated adjusted third model. Other embodiments are disclosed.
Type: Grant
Filed: March 10, 2017
Date of Patent: January 29, 2019
Assignee: AT&T Intellectual Property I, L.P.
Inventor: Raghuraman Gopalan
-
Patent number: 10192112
Abstract: Field data is collected of a field. Each instance of field data contains information that can be used to determine a value corresponding to whether or not a plant is present or absent in a particular location and is referred to as a plant presence value. The plant presence values are aggregated using the position data associated with each instance of field data to generate aggregated plant presence values. Gaps between plots are identified based partly on variations in the plant presence values within the aggregated field data. Information known about a field can be used to heuristically identify gaps in a seed line or used to eliminate locations on a seed line that may look like a gap based on low plant presence values. The aggregated plant presence values can be presented as a heat map of plant presence values showing the relative plant density of the field.
Type: Grant
Filed: November 2, 2016
Date of Patent: January 29, 2019
Assignee: Blue River Technology Inc.
Inventors: Lee Kamp Redden, James Patrick Ostrowski, James Willis, Zeb Wheeler
-
Patent number: 10192113
Abstract: The described positional awareness techniques employing sensory data gathering and analysis hardware with reference to specific example implementations implement improvements in the use of sensors, techniques and hardware design that can enable specific embodiments to provide positional awareness to machines with improved speed and accuracy. The sensory data are gathered from multiple operational cameras and one or more auxiliary sensors.
Type: Grant
Filed: July 5, 2017
Date of Patent: January 29, 2019
Assignee: PerceptIn, Inc.
Inventors: Shaoshan Liu, Zhe Zhang, Grace Tsai
-
Patent number: 10192114
Abstract: Some aspects of the invention relate to a mobile apparatus including an image sensor configured to convert an optical image into an electrical signal. The optical image includes an image of a vehicle license plate. The mobile apparatus includes a license plate detector configured to process the electrical signal to recover information from the vehicle license plate image. The mobile apparatus includes an interface configured to transmit the vehicle license plate information to a remote apparatus and receive a vehicle history report corresponding to the vehicle license plate image in response to the transmission.
Type: Grant
Filed: August 21, 2017
Date of Patent: January 29, 2019
Assignee: BLINKER, INC.
Inventors: Anthony Russell Wilbert, Hans Brandon Wach, David Ching-Chien Chung
-
Patent number: 10192115
Abstract: Described herein are a system and methods for generating a record of objects, as well as respective positions for those objects, with respect to a user. In some embodiments, a user may use a user device to scan an area that includes one or more objects. The one or more objects may be identified from image information obtained from the user device. Positional information for each of the one or more objects may be determined from depth information obtained from a depth sensor installed upon the user device. In some embodiments, the one or more objects may be mapped to object models stored in an object model database. The image information displayed on the user device may be augmented so that it depicts the object models associated with the one or more objects instead of the actual objects.
Type: Grant
Filed: December 13, 2017
Date of Patent: January 29, 2019
Assignee: Lowe's Companies, Inc.
Inventors: Mason E. Sheffield, Josh Shabtai
-
Patent number: 10192116
Abstract: The disclosure relates to recognizing data such as items or entities in content. In some aspects, content may be received and feature information, such as face recognition data and voice recognition data, may be generated. Scene segmentation may also be performed on the content, grouping the various shots of the video content into one or more shot collections, such as scenes. For example, a decision lattice representative of possible scene segmentations may be determined and the most probable path through the decision lattice may be selected as the scene segmentation. Upon generating the feature information and performing the scene segmentation, one or more items or entities that are present in the scene may be identified.
Type: Grant
Filed: May 27, 2016
Date of Patent: January 29, 2019
Assignee: Comcast Cable Communications, LLC
Inventors: Jan Neumann, Evelyne Tzoukermann, Amit Bagga, Oliver Jojic, Bageshree Shevade, David F. Houghton, Corey Farrell
-
Patent number: 10192117
Abstract: A method for graph-based spatiotemporal video segmentation and automatic target object extraction in high-dimensional feature space includes using a processor to automatically analyze an entire volumetric video sequence; using the processor to construct a high-dimensional feature space that includes color, motion, time, and location information so that pixels in the entire volumetric video sequence are reorganized according to their unique and distinguishable feature vectors; using the processor to create a graph model that fuses the appearance, spatial, and temporal information of all pixels of the video sequence in the high-dimensional feature space; and using the processor to group pixels in the graph model that are inherently similar and assign the same labels to them to form semantic spatiotemporal key segments.
Type: Grant
Filed: May 27, 2016
Date of Patent: January 29, 2019
Assignee: KODAK ALARIS INC.
Inventors: Alexander C. Loui, Lei Fan
-
Patent number: 10192118
Abstract: Provided is an analysis device, including: a processor configured to implement an acquisition function of acquiring information indicating play events that are defined based on a motion of a user who plays a sport and arranged within a time interval, and a pattern estimation function of estimating a play pattern based on an arrangement of the play events.
Type: Grant
Filed: October 22, 2014
Date of Patent: January 29, 2019
Assignee: SONY CORPORATION
Inventors: Hideyuki Matsunaga, Yusuke Watanabe
-
Patent number: 10192119
Abstract: A method for generating a summary video sequence from a source video sequence is disclosed. The method comprises: identifying, in the source video sequence, event video sequences, wherein each event video sequence comprises consecutive video frames in which one or more objects of interest are present; extracting, from video frames of one or more event video sequences of the event video sequences, pixels depicting the respective one or more objects of interest; while keeping spatial and temporal relations of the extracted pixels as in the source video sequence, overlaying the extracted pixels of the video frames of the one or more event video sequences onto video frames of a main event video sequence of the event video sequences, thereby generating the summary video sequence. A video processing device configured to generate the summary video sequence is also disclosed.
Type: Grant
Filed: May 23, 2017
Date of Patent: January 29, 2019
Assignee: Axis AB
Inventors: Christian Ljungberg, Erik Nilsson
-
Patent number: 10192120
Abstract: An electronic device with a display, processor(s), and memory displays a video monitoring user interface including a video feed from a camera located remotely from the client device in a first region and an event timeline in a second region, the event timeline including event indicators for motion events previously detected by the camera. The electronic device detects a user input selecting a portion of the event timeline, where the selected portion of the event timeline includes a subset of the event indicators. In response to the user input, the electronic device causes generation of a time-lapse video clip of the selected portion of the event timeline. The electronic device displays the time-lapse video clip, where motion events corresponding to the subset of the event indicators are played at a slower speed than the remainder of the selected portion of the event timeline.
Type: Grant
Filed: July 1, 2015
Date of Patent: January 29, 2019
Assignee: GOOGLE LLC
Inventors: Jason N. Laska, Greg R. Nelson, Greg Duffy, Hiro Mitsuji, Lawrence W. Neal, Cameron Hill
-
Patent number: 10192121
Abstract: A display system and method for displaying an image to the driver of a vehicle, in particular a commercial vehicle. The display system has a capturing device mountable to the vehicle and adapted to capture at least part of the vehicle's immediate environment, and to generate signals corresponding to the captured part of the vehicle's immediate environment; a calculation unit adapted to receive the signals generated by the capturing device, determine obstacles within the captured immediate environment of the vehicle, and generate a display image illustrating the vehicle in stylized or symbolic representation, and the obstacle identified in the vehicle's immediate environment as well as its relative position with regard to the vehicle; and a rendering unit adapted to display the display image generated by the calculation unit within the vehicle and visible for the driver.
Type: Grant
Filed: September 18, 2015
Date of Patent: January 29, 2019
Assignee: MEKRA LANG NORTH AMERICA, LLC
Inventors: Andreas Enz, Werner Lang
-
Patent number: 10192122
Abstract: A driving assist apparatus includes an image acquisition part to acquire a captured image around a vehicle, a position information acquisition part to acquire position information of a first object existing around the vehicle and detected by a sensor, a detection range determination part to determine a detection range of the first object within the captured image based on the position information of the first object, and an object recognition part to perform image processing on, within the captured image, an image existing in a range other than the detection range of the first object, thereby to recognize a second object that is different from the first object. Hence, a processing time taken for recognizing the second object from the captured image can be shortened.
Type: Grant
Filed: August 21, 2014
Date of Patent: January 29, 2019
Assignee: MITSUBISHI ELECTRIC CORPORATION
Inventors: Michinori Yoshida, Naoyuki Tsushima
-
Patent number: 10192124
Abstract: Disclosed is an optical unit wherein a rotating reflector rotates about a rotation axis in one direction, while reflecting light emitted from a light source. The rotating reflector is provided with a reflecting surface such that the light reflected by the rotating reflector, while rotating, forms a desired light distribution pattern, said light having been emitted from the light source. The light source is composed of light emitting elements. The rotation axis is provided within a plane that includes an optical axis and the light source. The rotating reflector is provided with, on the periphery of the rotation axis, a blade that functions as the reflecting surface.
Type: Grant
Filed: April 12, 2011
Date of Patent: January 29, 2019
Assignee: KOITO MANUFACTURING CO., LTD.
Inventor: Satoshi Yamamura
-
Patent number: 10192125
Abstract: A vehicle is disclosed that includes systems for adjusting the transmittance of one or more windows of the vehicle. The vehicle may include a camera outputting images taken of an occupant within the vehicle. The vehicle may also include an artificial neural network running on computer hardware carried on-board the vehicle. The artificial neural network may be trained to classify the occupant of the vehicle using the images captured by the camera as input. The vehicle may further include a controller controlling transmittance of the one or more windows based on classifications made by the artificial neural network. For example, if the artificial neural network classifies the occupant as squinting or shading his or her eyes with a hand, the controller may reduce the transmittance of a windshield, side window, or some combination thereof.
Type: Grant
Filed: October 20, 2016
Date of Patent: January 29, 2019
Assignee: FORD GLOBAL TECHNOLOGIES, LLC
Inventors: Scott Vincent Myers, Alexandro Walsh, Francois Charette, Lisa Scaria
-
Patent number: 10192126
Abstract: Provided is a behavior recognition apparatus, including a detection unit configured to detect, based on a vehicle interior image obtained by photographing a vehicle interior, positions of a plurality of body parts of a person inside a vehicle in the vehicle interior image; a feature extraction unit configured to extract a rank-order feature which is a feature based on a rank-order of a magnitude of a distance between parts obtained by the detection unit; and a discrimination unit configured to discriminate a behavior of an occupant in the vehicle using a discriminator learned in advance and the rank-order feature extracted by the feature extraction unit. Also provided is a learning apparatus to learn the discrimination unit.
Type: Grant
Filed: April 20, 2017
Date of Patent: January 29, 2019
Assignee: TOYOTA JIDOSHA KABUSHIKI KAISHA
Inventors: Masao Yamanaka, Toshifumi Nishijima
-
Patent number: 10192127
Abstract: Embodiments of the present invention provide a system for dynamically tuning optical character recognition (OCR) processes. The system receives or captures an image of a resource document and uses a general or default OCR process to identify a source of the document and values of multiple data fields in the image of the document. When the system determines that a data field is missing or cannot be extracted, it causes a computing device to display the image of the resource document and requests user input of a coordinate area of the missing data field from an associated specialist. Once the user input is received, the system applies a data field-specific OCR process on the coordinate area of the missing data field to extract the value of the data field. This value of the missing data field can be transmitted to a processing system for further processing.
Type: Grant
Filed: July 24, 2017
Date of Patent: January 29, 2019
Assignee: Bank of America Corporation
Inventors: John B. Hall, Michael J. Pepe, Jr., Murali Santhanam, Kerry Kurt Simpkins
-
Patent number: 10192128
Abstract: Provided is a technique for enhancing operability of a mobile apparatus. An information processing apparatus (2000) includes a first processing unit (2020), a second processing unit (2040), and a control unit (2060). The first processing unit (2020) generates information indicating an event detection position in accordance with a position on a surveillance image set in a first operation. The first operation is an operation with respect to the surveillance image displayed on a display screen. The second processing unit (2040) performs a display change process with respect to the surveillance image or a window including the surveillance image. The control unit (2060) causes any one of the first processing unit (2020) and the second processing unit (2040) to process the first operation on the basis of a second operation.
Type: Grant
Filed: March 27, 2015
Date of Patent: January 29, 2019
Assignee: NEC CORPORATION
Inventors: Kenichiro Ida, Hiroshi Kitajima, Hiroyoshi Miyano
-
Patent number: 10192129
Abstract: Systems and methods are disclosed for selecting target objects within digital images. In particular, in one or more embodiments, the disclosed systems and methods generate a trained neural network based on training digital images and training indicators. Moreover, one or more embodiments of the disclosed systems and methods utilize a trained neural network and iterative user indicators to select targeted objects in digital images. Specifically, the disclosed systems and methods can transform user indicators into distance maps that can be utilized in conjunction with color channels and a trained neural network to identify pixels that reflect the target object.
Type: Grant
Filed: November 18, 2015
Date of Patent: January 29, 2019
Assignee: ADOBE SYSTEMS INCORPORATED
Inventors: Brian Price, Scott Cohen, Ning Xu
-
Patent number: 10192130
Abstract: Some aspects of the invention relate to a mobile apparatus including an image sensor configured to convert an optical image into an electrical signal. The optical image includes an image of a vehicle license plate. The mobile apparatus includes a license plate detector configured to process the electrical signal to recover information from the vehicle license plate image. The mobile apparatus includes an interface configured to transmit the vehicle license plate information to a remote apparatus and receive an estimated value for a vehicle corresponding to the vehicle license plate in response to the transmission.
Type: Grant
Filed: March 6, 2017
Date of Patent: January 29, 2019
Assignee: BLINKER, INC.
Inventors: Anthony Russell Wilbert, Hans Brandon Wach, David Ching-Chien Chung
-
Patent number: 10192131
Abstract: An image identification system may identify key points on a known image, variations of the known image in different levels of blur, and an unidentified image. One or more geometric shapes may be formed from the key points. A match between the unidentified image and either the known image or a blurred variation of the known image may be determined by comparison of the respective geometric shapes.
Type: Grant
Filed: August 7, 2018
Date of Patent: January 29, 2019
Assignee: Blinkfire Analytics, Inc.
Inventors: Stephen Joseph Olechowski, III, Nan Jiang, Alejandro Tatay de Pascual
-
Patent number: 10192132
Abstract: A method and apparatus for extraction of dots in an image are described. An image is binarized according to an initial intensity threshold to obtain an initial binary image including foreground and background pixels. Each foreground pixel has a foreground intensity value and each background pixel has a background intensity value. A set of blobs including foreground pixels is selected from the initial binary image to be part of a selected set of dots, where each blob from the set of blobs has characteristics of a dot. Responsive to determining that a successive binarization is to be performed, the following operations are repeated: (1) binarization of the image according to a successive intensity threshold, and (2) selection of a successive set of blobs, where each blob has characteristics of a dot. Responsive to determining that a successive binarization is not to be performed, the selected set of dots is output.
Type: Grant
Filed: September 27, 2016
Date of Patent: January 29, 2019
Assignee: MATROX ELECTRONIC SYSTEMS LTD.
Inventor: Dominique Rivard
-
Patent number: 10192133
Abstract: A marker whose at least one of a position and pose with respect to a capturing unit is estimated includes: quadrilateral specifying points that specify a quadrilateral shape; a first circle group that is a group of a plurality of circles whose centers are present in a line of a first diagonal which is one of two diagonals of the specified quadrilateral shape, and which are included in the quadrilateral shape; a second circle group that is a group of a plurality of circles whose centers are present in a line of a second diagonal which is the other diagonal of the two diagonals than the first diagonal, and which are included in the quadrilateral shape; and a direction-identification point that specifies a direction of the quadrilateral shape.
Type: Grant
Filed: May 19, 2016
Date of Patent: January 29, 2019
Assignee: SEIKO EPSON CORPORATION
Inventors: Yang Yang, Guoyi Fu
-
Patent number: 10192134
Abstract: Embodiments are disclosed that relate to color identification. In one example, an image processing method comprises receiving an infrared (IR) image including a plurality of IR pixels, each IR pixel specifying one or more IR parameters of that IR pixel; identifying, in the IR image, IR-skin pixels that image human skin; identifying a skin tone of identified human skin based at least in part on the IR-skin pixels, the skin tone having one or more expected visible light (VL) parameters; receiving a VL image including a plurality of VL pixels, each VL pixel specifying one or more VL parameters of that VL pixel; identifying, in the VL image, VL-skin pixels that image identified human skin; and adjusting the VL image to increase a correspondence between the one or more VL parameters of the VL-skin pixels and the one or more expected VL parameters of the identified skin tone.
Type: Grant
Filed: February 6, 2015
Date of Patent: January 29, 2019
Assignee: MICROSOFT TECHNOLOGY LICENSING, LLC
Inventors: Lin Liang, Christian F. Huitema
-
Patent number: 10192135
Abstract: A 3D image analyzer for the determination of a gaze direction or a line of sight (having a gaze direction vector and a location vector, which e.g. indicates the pupil midpoint and where the gaze direction vector starts) in a 3D room is configured to receive one first set of image data and a further set of image information, wherein the first image contains a pattern, which displays a three-dimensional object from a first perspective into a first image plane, and wherein the further set contains an image having a pattern, which displays the same three-dimensional object from a further perspective into a further image plane, or wherein the further set has an image information and/or a relation between at least two points in the first image and/or at least a position information. The 3D image analyzer has a position calculator and an alignment calculator and calculates therewith a gaze direction in a 3D room.
Type: Grant
Filed: July 28, 2016
Date of Patent: January 29, 2019
Assignee: Fraunhofer-Gesellschaft zur Foerderung der angewandten Forschung e.V.
Inventors: Daniel Krenzer, Albrecht Hess, András Kátai
-
Patent number: 10192136
Abstract: An image processing apparatus and an image processing method notify a user of an object region from which an object cannot be recognized. The image processing apparatus includes an interface and a processor. The interface receives an input image. The processor extracts a first target object region from the input image and reads first identification information from the first target object region. If the first identification information fails to be read, the processor outputs an output image which includes the first target object region and information representing the read failure of the first target object region.
Type: Grant
Filed: October 20, 2016
Date of Patent: January 29, 2019
Assignee: TOSHIBA TEC KABUSHIKI KAISHA
Inventors: Takayuki Sawada, Yuishi Takeno
-
Patent number: 10192137
Abstract: In some implementations, a method includes: receiving, from the camera, a sample image that includes a fingerprint and a mensuration reference device, where the sample image is associated with a resolution; identifying (i) a plurality of edge candidate groups within the sample image, and (ii) a set of regularity characteristics associated with each of the plurality of edge candidate groups; determining that the associated set of regularity characteristics indicates the mensuration reference device; identifying a ruler candidate group, from each of the plurality of edge candidate groups, based at least on determining that the associated set of regularity characteristics indicates the mensuration reference device; computing a scale associated with the sample image based at least on extracting a set of ruler marks from the identified ruler candidate group; and generating, based at least on the scale associated with the sample image, a scaled image.
Type: Grant
Filed: November 16, 2015
Date of Patent: January 29, 2019
Assignee: MorphoTrak, LLC
Inventors: Darrell Hougen, Peter Zhen-Ping Lo
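The scale-computation step this abstract describes reduces to a simple calculation once the ruler marks have been extracted: the average pixel gap between consecutive marks, divided by the known physical spacing of the marks, gives the image scale. A minimal sketch, assuming mark positions are already extracted as pixel coordinates along the ruler axis and the mark spacing is 1 mm (the abstract does not state units):

```python
def pixels_per_mm(mark_positions_px, mark_spacing_mm=1.0):
    """Estimate image scale from extracted ruler marks: average the pixel
    gap between consecutive marks, then divide by the known physical
    spacing. Inputs and the 1 mm default are illustrative assumptions."""
    gaps = [b - a for a, b in zip(mark_positions_px, mark_positions_px[1:])]
    return (sum(gaps) / len(gaps)) / mark_spacing_mm

# e.g. marks detected at x = 10, 20, 30, 40 px -> 10 px per mm
scale = pixels_per_mm([10, 20, 30, 40])
```

Averaging over all consecutive gaps rather than using a single pair makes the estimate robust to small localization errors in individual marks.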
-
Patent number: 10192138
Abstract: Techniques and systems are provided for identifying unknown content. For example, a number of vectors out of a plurality of vectors projected from an origin point can be determined that are between a reference data point and an unknown data point. The number of vectors can be used to estimate an angle between a first vector (from the origin point to a reference data point) and a second vector (from the origin point to an unknown data point). A distance between the reference data point and the unknown data point can then be determined. Using the determined distance, candidate data points can be determined from a set of reference data points. The candidate data points can be analyzed to identify the unknown data point.
Type: Grant
Filed: April 15, 2016
Date of Patent: January 29, 2019
Assignee: INSCAPE DATA, INC.
Inventor: Zeev Neumeier
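The counting idea in this abstract can be illustrated in two dimensions: sample probe directions from the origin, count the fraction that fall between the direction to the reference point and the direction to the unknown point, convert that fraction to an angle, and recover the point-to-point distance with the law of cosines. This is only a sketch of the general idea under those assumptions, not the patented method; all names and parameters are illustrative:

```python
import math
import random

def estimate_angle_and_distance(ref, unk, n_probes=200_000, seed=1):
    """2-D sketch: count random probe directions lying between the
    directions origin->ref and origin->unk, estimate the angle from the
    counted fraction, then derive the distance via the law of cosines."""
    rng = random.Random(seed)
    a1 = math.atan2(ref[1], ref[0]) % (2 * math.pi)
    a2 = math.atan2(unk[1], unk[0]) % (2 * math.pi)
    lo, hi = sorted((a1, a2))
    inside = 0
    for _ in range(n_probes):
        theta = rng.uniform(0.0, 2 * math.pi)
        hit = lo <= theta <= hi
        if hi - lo > math.pi:   # the smaller arc wraps past angle 0
            hit = not hit
        inside += hit
    angle = 2 * math.pi * inside / n_probes
    r1, r2 = math.hypot(*ref), math.hypot(*unk)
    dist = math.sqrt(r1 * r1 + r2 * r2 - 2 * r1 * r2 * math.cos(angle))
    return angle, dist
```

Counting vectors sidesteps computing the angle directly from coordinates, which is the apparent motivation in the abstract; the estimate converges as the number of probe vectors grows.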
-
Patent number: 10192139
Abstract: The presently disclosed subject matter includes a system and method for tracking objects by a sensing unit operable to communicate over a communication link with a control center. The system enables execution of a command, generated at the control center with respect to a selected object in an image captured by the sensing unit, notwithstanding the time delay between when the sensing unit acquires the image containing the selected object and when the command with respect to the selected object is received at the sensing unit.
Type: Grant
Filed: May 8, 2013
Date of Patent: January 29, 2019
Assignee: ISRAEL AEROSPACE INDUSTRIES LTD.
Inventor: Meir Weichselbaum
-
Patent number: 10192140
Abstract: Improvements are disclosed for detecting counterfeit objects, based on comparison to digital fingerprints that describe features found in images of objects known to be counterfeit.
Type: Grant
Filed: July 12, 2016
Date of Patent: January 29, 2019
Assignee: Alitheon, Inc.
Inventors: David Justin Ross, Brian J. Elmenhurst, Mark Tocci, John Forbes, Heather Wheelock Ross
-
Patent number: 10192141
Abstract: This disclosure relates to a method in which points common to two or more of the images that appear to represent the same real world features are identified, and changes in the location of the points between respective images are used to deduce the motion of the camera and to find the position of the real world features in three dimensional space. In order to determine the scale of the three dimensional information, the position of a reference feature, whose actual distance from the camera is known, is found from the images. The reference feature is found by considering only candidate points falling within a portion of the image corresponding to a part of the field of view of the camera. The scale is determined from the distance between the camera and the reference feature in the image and in real life.
Type: Grant
Filed: November 22, 2016
Date of Patent: January 29, 2019
Assignee: APPLICATION SOLUTIONS (ELECTRONICS AND VISION) LTD.
Inventor: Robin Plowman
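The final scale-determination step the abstract describes has a simple core: a structure-from-motion reconstruction is correct only up to scale, so one reference feature whose true camera distance is known fixes the scale factor for every reconstructed point. A minimal sketch of that step under the assumption that the reconstruction is expressed in the camera frame (function and argument names are illustrative):

```python
def apply_metric_scale(points, ref_reconstructed_dist, ref_true_dist):
    """Rescale an up-to-scale 3-D reconstruction to real-world units using
    one reference feature whose actual distance from the camera is known.
    points: list of (x, y, z) in the camera frame, arbitrary scale."""
    s = ref_true_dist / ref_reconstructed_dist
    return [(s * x, s * y, s * z) for (x, y, z) in points]
```

Because a single ratio scales the whole reconstruction, locating one reliable reference feature (here, restricted to a known portion of the field of view) is sufficient to make every recovered position metric.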
-
Patent number: 10192142
Abstract: A computer executed method for supervised facial recognition comprising the operations of preprocessing, feature extraction and recognition. Preprocessing may comprise dividing received face images into several subimages, converting the different face image (or subimage) dimensions into a common dimension and/or converting the datatypes of all of the face images (or subimages) into an appropriate datatype. In feature extraction, 2D DMWT (two-dimensional discrete multiwavelet transform) is used to extract information from the face images. Application of the 2D DMWT may be followed by FastICA (fast independent component analysis). FastICA, or, in cases where FastICA is not used, 2D DMWT, may be followed by application of the l2-norm and/or eigendecomposition to obtain discriminating and independent features. The resulting independent features are fed into the recognition phase, which may use a neural network, to identify an unknown face image.
Type: Grant
Filed: July 7, 2016
Date of Patent: January 29, 2019
Assignee: University of Central Florida Research Foundation, Inc.
Inventors: Wasfy Mikhael, Ahmed Aldhahab
-
Patent number: 10192143
Abstract: Systems and methods of distinguishing between features depicted in an image are presented herein. Information defining an image may be obtained. The image may include visual content comprising an array of pixels. The array may include pixel rows. An identification of a pixel row in an image may be obtained. Distances of individual pixels and/or groups of pixels from the identified row of pixels may be determined. Parameter values for a set of pixel parameters of individual pixels of the image may be determined. Based on one or more of the distances from the identified row of pixels, parameter values of one or more pixel parameters, and/or other information, individual pixels and/or groups of pixels may be classified as one of a plurality of image features.
Type: Grant
Filed: September 20, 2016
Date of Patent: January 29, 2019
Assignee: GoPro, Inc.
Inventors: Vincent Garcia, Maxime Schwab, Francois Lagunas
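The classification scheme described here combines two signals per pixel: its distance from an identified reference row and the values of one or more pixel parameters. A toy sketch of that combination, assuming the reference row is a horizon line, the pixel parameter is the blue channel, and the two feature classes are "sky" and "ground" (all of these specifics are assumptions for illustration, not taken from the patent):

```python
def classify_pixels(image, horizon_row, blue_thresh=128):
    """Label each pixel 'sky' or 'ground' from (a) its signed distance to
    an identified pixel row and (b) a pixel parameter (the blue channel).
    image: rows of (r, g, b) tuples; thresholds are illustrative."""
    labels = []
    for r, row in enumerate(image):
        out = []
        for px in row:
            dist = horizon_row - r          # positive above the row
            out.append("sky" if dist > 0 and px[2] >= blue_thresh else "ground")
        labels.append(out)
    return labels
```

Using the row distance as a prior keeps bright blue objects below the reference row from being mislabeled, which a color threshold alone could not do.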
-
Patent number: 10192144
Abstract: When reading a coupon that displays a pattern when a target substance is detected, recognition of the target pattern can be hindered by nonuniform illumination of the coupon. In one aspect, methods and devices are disclosed for uniform illumination of a coupon using a negative axicon lens and a light diffusing assembly. In another separate aspect, methods are disclosed for mathematically compensating for nonuniform illumination of a coupon.
Type: Grant
Filed: April 13, 2017
Date of Patent: January 29, 2019
Assignee: RESEARCH INTERNATIONAL, INC.
Inventor: Elric Saaski
-
Patent number: 10192145
Abstract: A method of providing a set of feature descriptors configured to be used in matching an object in an image of a camera is provided. The method includes:
a) providing at least two images of a first object;
b) extracting, in at least two of the images, at least one feature from the respective image;
c) providing at least one descriptor for an extracted feature, and storing the descriptors;
d) matching descriptors in the first set of descriptors;
e) computing a score parameter based on the result of the matching process;
f) selecting at least one descriptor based on its score parameter;
g) adding the selected descriptor(s) to a second set of descriptors; and
h) updating the score parameter of descriptors in the first set based on the selection process and the result of the matching process.
Type: Grant
Filed: February 28, 2017
Date of Patent: January 29, 2019
Assignee: Apple Inc.
Inventors: Mohamed Selim Ben Himane, Daniel Kurz, Thomas Olszamowski
-
Patent number: 10192146
Abstract: A method of rendering an image includes Monte Carlo rendering a scene to produce a noisy image. The noisy image is processed to render an output image. The processing applies a machine learning model that utilizes colors and/or features from the rendering system for denoising the noisy image and/or for adaptively placing samples during rendering.
Type: Grant
Filed: December 13, 2017
Date of Patent: January 29, 2019
Assignee: The Regents of the University of California
Inventors: Pradeep Sen, Steve Bako, Nima Khademi Kalantari
-
Patent number: 10192147
Abstract: Disclosed are an apparatus and a method for detection of foreign substances in a depth sensing system. In one embodiment, a depth sensing device includes a light source to emit light, an image sensor and a processor. The image sensor receives, through an optical component, the light reflected by the environment of the depth sensing device. The image sensor further generates a depth map including a plurality of pixel values corresponding to distances between the depth sensing device and the environment. The processor detects a blurred portion of the depth map due to the presence of a foreign substance on the optical component. The processor may further cause output of a user alert of the presence of the foreign substance on the optical component.
Type: Grant
Filed: August 30, 2016
Date of Patent: January 29, 2019
Assignee: Microsoft Technology Licensing, LLC
Inventors: Michael Bleyer, Raymond Kirk Price, Jian Zhao, Denis Demandolx, Juan V. Esteve Balducci
-
Patent number: 10192148
Abstract: A string of Latin-alphabet based language texts is received and formed into a multi-layer 2-D symbol in a computing system. The received string contains at least one word, with each word containing at least one letter of the Latin-alphabet based language. The 2-D symbol comprises a matrix of N×N pixels of data representing a super-character. The matrix is divided into M×M sub-matrices. Each sub-matrix represents one ideogram formed from the at least one letter contained in a corresponding word in the received string. Each ideogram has a square format with a dimension of EL letters by EL letters (i.e., rows and columns). EL is determined from the total number of letters (LL) contained in the corresponding word. EL, LL, N and M are positive integers. The super-character represents a meaning formed from a specific combination of at least one ideogram. The meaning of the super-character is learned with image classification of the 2-D symbol.
Type: Grant
Filed: September 18, 2018
Date of Patent: January 29, 2019
Assignee: Gyrfalcon Technology Inc.
Inventors: Lin Yang, Patrick Z. Dong, Charles Jin Young, Jason Z. Dong, Baohua Sun
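The abstract only says that EL is determined from the word's letter count LL; the natural choice for the smallest square that holds all letters is EL = ceil(sqrt(LL)), and that assumption is used in this sketch of the per-word ideogram layout (padding with spaces is likewise an assumption):

```python
import math

def square_side(letter_count):
    """Smallest EL such that an EL x EL square holds letter_count letters.
    Assumes EL = ceil(sqrt(LL)); the abstract does not state the formula."""
    return math.ceil(math.sqrt(letter_count))

def ideogram_grid(word):
    """Lay a word out row-by-row in its EL x EL square, space-padded."""
    el = square_side(len(word))
    padded = word.ljust(el * el)
    return [list(padded[i * el:(i + 1) * el]) for i in range(el)]
```

Under this reading, a 4-letter word fills a 2×2 square exactly, while a 5-letter word needs a 3×3 square with four cells left blank.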
-
Patent number: 10192149
Abstract: A remote-editing card printing system using mobile handsets includes a card printer for printing cards having a specific size. The card printer includes a transformer for transforming instructions into machine codes that instruct a printing unit of the card printer to print cards with predetermined drawings or texts. A layout editor allows a user to input printing instructions or layout instructions through an I/O device of an electronic computer device to edit a layout of a card, and thus causes the printing unit to print the cards based on the layout and printing instructions. The layout editor may be installed on the electronic computer device or a cloud device, with an app installed on a handset connected to the layout editor so that a user may edit the instructions directly in the app, or the layout editor may be installed on the handset directly.
Type: Grant
Filed: January 25, 2018
Date of Patent: January 29, 2019
Inventor: Yi-Ming Chen
-
Patent number: 10192150
Abstract: A print engine includes a printer module for printing image data in a plurality of different print modes, wherein each print mode has an associated line print time. A data interface receives image data and associated metadata for a print job from a pre-processing system, the metadata including print mode metadata. A digital memory stores a plurality of pulse timing functions, each pulse timing function corresponding to one of the line print times associated with the plurality of print modes. A metadata interpreter interprets the metadata and determines the print mode to be used to print the image data. A printer module controller controls the printer module to print the image data using the pulse timing function corresponding to the line print time associated with the print mode, wherein each light source is activated for a pulse count corresponding to a pixel code value of an associated image pixel.
Type: Grant
Filed: June 28, 2017
Date of Patent: January 29, 2019
Assignee: EASTMAN KODAK COMPANY
Inventors: Chung-Hui Kuo, David R. Brent, Frederick Edward Altrieth, III, Stacy M. Munechika, Richard George Allen