Abstract: A break recommendation method, system, and non-transitory computer readable medium include detecting a deviation between a current cognitive state of the user and a past cognitive state of the user during a predetermined amount of time for a document type, based on a change in an eye gaze movement and a facial and emotional expression, and recommending that the user take a break from viewing the document for a predetermined amount of time based on the deviation being greater than a predetermined threshold value, where the deviation is related to the user viewing the document and the document type of the document.
Type:
Grant
Filed:
February 27, 2019
Date of Patent:
May 18, 2021
Assignee:
INTERNATIONAL BUSINESS MACHINES CORPORATION
Inventors:
Kuntal Dey, Seema Nagar, Sudhanshu Singh, Roman Vaculin
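As a rough illustration of the thresholding logic in the abstract above, the deviation check might be sketched as follows; the feature names, weights, and threshold value are illustrative assumptions, not taken from the patent:

```python
# Hypothetical sketch of the deviation-vs-threshold break recommendation.
# Feature names, weights, and the threshold are illustrative assumptions.

def cognitive_deviation(current: dict, baseline: dict) -> float:
    """Weighted distance between current and past cognitive-state features."""
    weights = {"gaze_movement": 0.6, "facial_expression": 0.4}  # assumed weights
    return sum(w * abs(current[k] - baseline[k]) for k, w in weights.items())

def recommend_break(current: dict, baseline: dict, threshold: float = 0.5) -> bool:
    """Recommend a break when the deviation exceeds the threshold."""
    return cognitive_deviation(current, baseline) > threshold

baseline = {"gaze_movement": 0.2, "facial_expression": 0.1}  # past cognitive state
current = {"gaze_movement": 0.9, "facial_expression": 0.6}   # current cognitive state
print(recommend_break(current, baseline))  # True: deviation ~0.62 > 0.5
```

In practice the baseline would be accumulated per user and per document type, as the abstract describes, rather than hard-coded.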
Abstract: The present invention relates to a system and method for detecting a close cut-in vehicle based on free space. The system includes a front camera that detects free space information representing objects in front of the host vehicle and transmits the information to an electronic control unit, and a cut-in vehicle detection unit that selects a close cut-in vehicle as a control target through a situation analysis using the free space information input from the front camera, and performs a deceleration control in response to the calculated demand acceleration using a relative speed of the selected control target. By moving up the recognition time of a close cut-in vehicle using free space information, the collision risk of ACC (Adaptive Cruise Control) can be reduced and traveling stability can be enhanced.
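The demand-acceleration calculation mentioned above could, as a hedged sketch, combine the gap error and relative speed of the selected cut-in target; the controller gains and time-gap policy below are assumptions, since the patent abstract does not specify them:

```python
# Illustrative sketch of demand-acceleration calculation for a cut-in target.
# Gains and the time-gap policy are assumed, not specified by the abstract.

def demand_acceleration(gap_m: float, rel_speed_mps: float,
                        time_gap_s: float = 1.5,
                        k_gap: float = 0.2, k_speed: float = 0.6) -> float:
    """Proportional controller on gap error and relative speed.

    rel_speed_mps < 0 means the host vehicle is closing on the target.
    """
    desired_gap = time_gap_s * 20.0  # assume ~20 m/s host speed for the sketch
    gap_error = gap_m - desired_gap
    return k_gap * gap_error + k_speed * rel_speed_mps

# A close cut-in (small gap, closing fast) yields a braking demand (negative).
print(demand_acceleration(gap_m=10.0, rel_speed_mps=-5.0))  # ~ -7.0 m/s^2
```

A real ACC controller would saturate this demand and blend it with comfort limits; the sketch only shows the sign and shape of the response.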
Abstract: An embodiment of the invention relates to a method for generating a tilted tomographic X-ray map. In an embodiment, the method includes providing a 3D image data set; determining, based on the 3D image data set, synthetic mammograms corresponding to different angles within the defined projection angle range; determining a point of interest in one of the synthetic mammograms; calculating coordinates of the point of interest in the one synthetic mammogram or the 3D image data set; determining a tilted image plane through the examination object, the tilted image plane including the point of interest and the rotation axis; generating the tilted tomographic X-ray image in the tilted image plane based on the provided 3D image data set; and displaying the tilted tomographic X-ray image.
Abstract: A method and apparatus for determining an interpupillary distance (IPD) are provided. To determine an IPD of a user, three-dimensional (3D) images for candidate IPDs may be generated, and user feedback on the 3D images may be received. A final IPD may be determined based on the user feedback.
Type:
Grant
Filed:
November 15, 2019
Date of Patent:
April 13, 2021
Assignee:
SAMSUNG ELECTRONICS CO., LTD.
Inventors:
Hyoseok Hwang, Dongwoo Kang, Byong Min Kang, Juyong Park, Dong Kyung Nam
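One plausible way to converge on a final IPD from per-candidate user feedback, as the abstract describes, is a bisection over the candidate range. The search strategy and feedback model below are illustrative assumptions, not the patented procedure:

```python
# Hedged sketch: bisect over candidate IPDs using binary user feedback.
# The feedback model ("too wide" above the true IPD) is an assumption.

def estimate_ipd(feedback, lo_mm: float = 50.0, hi_mm: float = 75.0,
                 tol_mm: float = 0.5) -> float:
    """Narrow the candidate IPD range until it is smaller than tol_mm.

    feedback(candidate) returns True if the rendered 3D image appears
    "too wide" (candidate above the user's IPD), else False.
    """
    while hi_mm - lo_mm > tol_mm:
        mid = (lo_mm + hi_mm) / 2.0
        if feedback(mid):
            hi_mm = mid
        else:
            lo_mm = mid
    return (lo_mm + hi_mm) / 2.0

# Simulated user whose true IPD is 63 mm.
estimate = estimate_ipd(lambda candidate: candidate > 63.0)
print(round(estimate, 1))  # close to 63
```

Each iteration corresponds to rendering one candidate 3D image and collecting one piece of user feedback.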
Abstract: The aim is to allow an observer wearing stereoscopic equipment to perceive a stereo image while an observer not wearing stereoscopic equipment perceives a clear image. Based on an original image, an image containing phase-modulated components a and an image containing phase-modulated components b are generated. These images are designed so that one who sees the original image (or a subject represented by the original image) together with the image containing phase-modulated components a with one eye, and the original image (or the subject) together with the image containing phase-modulated components b with the other eye, perceives a stereo image, while one who sees the original image (or the subject), the image containing phase-modulated components a, and the image containing phase-modulated components b with the same eye(s) perceives the original image.
Type:
Grant
Filed:
September 5, 2017
Date of Patent:
April 6, 2021
Assignee:
NIPPON TELEGRAPH AND TELEPHONE CORPORATION
Abstract: Various techniques are disclosed for smart surveillance camera systems and methods using thermal imaging to intelligently control illumination and monitoring of a surveillance scene. For example, a smart camera system may include a thermal imager, an IR illuminator, a visible light illuminator, a visible/near IR (NIR) light camera, and a processor. The camera system may capture thermal images of the scene using the thermal imager, and analyze the thermal images to detect a presence and an attribute of an object in the scene. In response to the detection, various light sources may be selectively operated to illuminate the object only when needed or desired, with a suitable type of light source, with a suitable beam angle and width, or in an otherwise desirable manner. The visible/NIR light camera may also be selectively operated based on the detection to capture or record surveillance images containing objects of interest.
Type:
Grant
Filed:
June 5, 2017
Date of Patent:
April 6, 2021
Assignee:
FLIR SYSTEMS, INC.
Inventors:
Andrew C. Teich, Nicholas Högasten, Theodore R. Hoelter, Katrin Strandemar
Abstract: The present technology relates to image signal processing. One aspect of the present technology involves analyzing reference imagery gathered by a camera system to determine which parts of an image frame offer high probabilities, relative to other image parts, of containing decodable watermark data. Another aspect of the present technology whittles down such determined image frame parts based on detected content (e.g., a cereal box) versus expected background within those parts.
Abstract: System and method for detecting the authenticity of products by detecting a unique chaotic signature. Photos of the products are taken at the plant and stored in a database/server. The server processes the images to detect, for each authentic product, a unique authentic signature which is the result of a manufacturing process, a process of nature, etc. To detect whether the product is genuine or not at the store, the user/buyer may take a picture of the product and send it to the server (e.g., using an app installed on a portable device or the like). Upon receipt of the photo, the server may process the received image in search of a pre-detected and/or pre-stored chaotic signature associated with an authentic product. The server may return a response to the user indicating the result of the search. A feedback mechanism may be included to guide the user to take a picture at a specific location of the product where the chaotic signature may exist.
Abstract: In some implementations, a device may detect edges in an image, and may identify, based on the edges, a rectangle that bounds a document in the image. The device may detect lines in the image, and may identify edge candidate lines by discarding one or more of the lines. The device may identify intersection points where lines, included in the edge candidate lines, intersect with one another. The device may identify corner candidate points by discarding one or more points included in the intersection points, and may identify a corner point included in the corner candidate points. The corner point may be a point, included in the corner candidate points, that is closest to one corner of the bounding rectangle. The device may perform perspective correction on the image of the document based on identifying the corner point.
Type:
Grant
Filed:
June 17, 2019
Date of Patent:
March 9, 2021
Assignee:
Capital One Services, LLC
Inventors:
Jason Pribble, Nicholas Capurso, Daniel Alan Jarvis
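The final corner-selection step described above (choosing, among the corner candidates, the point closest to each corner of the bounding rectangle) can be sketched as follows; the coordinates and data shapes are invented for illustration:

```python
import math

def nearest_corner_point(candidates, rect_corner):
    """Return the candidate point closest to the given bounding-rect corner."""
    return min(candidates, key=lambda p: math.dist(p, rect_corner))

# Hypothetical corner candidates surviving the line/intersection filtering.
candidates = [(12, 9), (480, 15), (11, 300), (470, 310)]
# Corners of the rectangle that bounds the detected document.
rect = [(10, 10), (475, 10), (10, 305), (475, 305)]

document_corners = [nearest_corner_point(candidates, c) for c in rect]
print(document_corners)  # [(12, 9), (480, 15), (11, 300), (470, 310)]
```

The four selected points would then feed the perspective-correction step (e.g., a homography fit from the document corners to an upright rectangle).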
Abstract: A system and method for invoice field detection and parsing includes the steps of extracting character bounding blocks using optical character recognition (OCR) or digital character extraction (DCE), enhancing the image quality, analyzing the document layout based on imaging techniques, detecting the invoice field based on machine learning techniques, and parsing the invoice field value based on the content information.
Abstract: One aspect provides a method, including: operating a mobile pipe inspection platform to obtain sensor data for the interior of a pipe; analyzing, using a processor, the sensor data using a trained model, where the trained model is trained using a dataset including sensor data of pipe interiors and one or more of: metadata identifying pipe feature locations contained within the sensor data of the dataset and metadata classifying pipe features contained within the sensor data of the dataset; performing one or more of: identifying, using a processor, a pipe feature location within the sensor data; and classifying, using a processor, a pipe feature of the sensor data; and thereafter producing, using a processor, an output including one or more of an indication of the identifying and an indication of the classifying. Other aspects are described and claimed.
Type:
Grant
Filed:
November 8, 2018
Date of Patent:
March 2, 2021
Assignee:
RedZone Robotics, Inc.
Inventors:
Justin Starr, Galin Konakchiev, Foster J Salotti, Mark Jordan, Nate Alford, Thorin Tobiassen, Todd Kueny, Jason Mizgorski
Abstract: An image calibration method of the present invention is configured to calibrate the position of an observation area in a motion image that includes image frames. The steps of the image calibration method include: determining the observation area and acquiring the central position of the observation area in a first image frame of the motion image; determining a first unique area, which complies with a gradient characteristic, in the first image frame; acquiring a first vector value from the central position of the observation area to the first unique area in the first image frame; finding a second unique area in a second image frame of the motion image according to the gradient characteristic; acquiring a second vector value from the central position of the observation area to the central position of the second unique area in the second image frame; and calibrating the position of the observation area in a third image frame according to the difference between the first vector value and the second vector value.
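The vector-difference calibration in the final step can be sketched numerically; the coordinates below are made up for illustration:

```python
# Sketch of the abstract's final step: shift the observation area by the
# drift between the first and second vector values to the unique area.

def calibrate_position(obs_center, v1, v2):
    """Apply the drift (v2 - v1) of the unique area to the observation center."""
    drift = (v2[0] - v1[0], v2[1] - v1[1])
    return (obs_center[0] + drift[0], obs_center[1] + drift[1])

obs_center = (100, 100)  # observation-area center in the first frame
v1 = (40, -10)           # center -> unique area, first frame
v2 = (43, -7)            # center -> unique area, second frame
print(calibrate_position(obs_center, v1, v2))  # (103, 103)
```

Intuitively, the unique area acts as a landmark: however much it drifts between frames, the observation area is shifted by the same amount.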
Abstract: Provided is a light source device including a substrate; a first light emitting element disposed on the substrate and including a first reflective layer, a second light emitting layer configured to emit light of a second wavelength, a first etch stop layer, a first light emitting layer configured to emit light of a first wavelength different from the second wavelength, and a first nanostructure reflective layer; and a second light emitting element disposed on the substrate, spaced apart from the first light emitting element, and including a second reflective layer having the same material and thickness as the first reflective layer, a third light emitting layer having the same material and structure as the second light emitting layer and configured to generate light of the second wavelength, a second etch stop layer having the same material and thickness as the first etch stop layer, and a second nanostructure reflective layer.
Type:
Grant
Filed:
November 20, 2018
Date of Patent:
February 16, 2021
Assignee:
SAMSUNG ELECTRONICS CO., LTD.
Inventors:
Byunghoon Na, Jangwoo You, Seunghoon Han
Abstract: A method is provided for populating a map with a set of avatars through the use of a mobile technology platform associated with a user. The method (201) includes developing a set of facial characteristics (205), wherein each facial characteristic in the set is associated with one of a plurality of individuals that the user has encountered over a period of time while using the mobile technology platform; recording the locations (207) and times at which each of the plurality of individuals was encountered; forming a first database by associating the recorded times and locations at which each of the plurality of individuals was encountered with the individual's facial characteristics in the set; generating a set of avatars (309) from the set of facial characteristics; and using the first database to populate (319) a map (307) with the set of avatars.
Abstract: A method is provided for estimating the orientation of a user's eyes in a scene in a system-agnostic manner. The method approximates pose-invariant, user-independent feature vectors by transforming the input coordinates to pose-invariant coordinates and then normalizing the data according to the statistical distributions of previously collected data used to create a learned mapping method. It then uses the learned mapping method to estimate the orientation of the user's eyes in the pose-invariant coordinate system, and finally transforms these estimates to a world coordinate system.
Type:
Grant
Filed:
June 21, 2019
Date of Patent:
January 26, 2021
Assignee:
Mirametrix Inc.
Inventors:
Mohamad Kharboutly, Anh Tuan Nghiem, Nicolas Widynski
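The normalization step above (scaling features by the statistics of previously collected data before applying the learned mapping) might look like the following sketch; the statistics and the linear stand-in for the learned mapping are invented for illustration:

```python
# Hypothetical z-score normalization against training-set statistics,
# followed by a stand-in linear "learned mapping" to (yaw, pitch).
TRAIN_MEAN = [0.1, -0.2, 0.05]  # assumed statistics of previously collected data
TRAIN_STD = [0.5, 0.4, 0.3]

WEIGHTS = [[1.0, 0.0, 0.5],     # stand-in mapping weights, one row per angle
           [0.0, 1.0, -0.5]]

def estimate_gaze(features):
    """Normalize pose-invariant features, then map them to (yaw, pitch)."""
    z = [(f - m) / s for f, m, s in zip(features, TRAIN_MEAN, TRAIN_STD)]
    return [sum(w * x for w, x in zip(row, z)) for row in WEIGHTS]

yaw, pitch = estimate_gaze([0.6, 0.2, 0.35])
print(round(yaw, 3), round(pitch, 3))  # 1.5 0.5
```

The resulting angles are still in the pose-invariant frame; the abstract's last step would transform them into world coordinates using the head pose.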
Abstract: An image processing device is configured to perform enhancement processing on a specific image, using multiple images of types that are different from one another, at least one of which is captured at a time different from a time at which the other images are captured. The image processing device includes a processor comprising hardware, the processor being configured to execute: acquiring the multiple images; calculating information representing a state of at least one of the multiple images that is used for enhancement; and creating an enhanced image by performing the enhancement processing on an image to be enhanced based on the information representing the state and the multiple images.
Abstract: A method for classifying eye opening data of an occupant's eye in a vehicle, to detect drowsiness/microsleep, including: generating a first eye opening data record at a first measuring time in a sliding time window, the first record including a measuring point representing a first eye opening degree and a first eyelid speed and/or acceleration of motion of the occupant's eye at the first measuring time; acquiring a second eye opening data record at a second measuring time, the second record including at least one acquisition point representing a second eye opening degree and a second eyelid speed and/or acceleration of motion of the occupant's eye; and executing a cluster analysis using the measuring point and the acquisition point to assign at least the first and/or second record to a first data cluster, to classify the eye opening data, the first cluster representing an opening state of the occupant's eye.
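As an illustrative sketch of the cluster-assignment step, each (eye-opening degree, eyelid speed) record could be assigned to the nearest cluster centroid; the centroids and feature values below are assumptions, not from the patent:

```python
import math

# Assumed cluster centroids in (eye-opening degree, eyelid speed) space:
# an "open" cluster and a "drowsy/microsleep" cluster.
CENTROIDS = {"open": (0.85, 0.10), "drowsy": (0.20, 0.02)}

def assign_cluster(record):
    """Assign an eye-opening record to the nearest centroid (toy k-means step)."""
    return min(CENTROIDS, key=lambda name: math.dist(record, CENTROIDS[name]))

measuring_point = (0.80, 0.12)    # first eye-opening data record
acquisition_point = (0.25, 0.03)  # second eye-opening data record
print(assign_cluster(measuring_point), assign_cluster(acquisition_point))
# open drowsy
```

A production system would learn the centroids from data inside the sliding time window rather than fixing them ahead of time.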
Abstract: A method includes generating a three-dimensional (3D) surface map associated with a patient from a patient sensor, generating a 3D patient space from the 3D surface map associated with the patient, determining a current pose associated with the patient based on the 3D surface map associated with the patient, comparing the current pose with a desired pose associated with the patient with respect to an imaging system, determining a recommended movement based on the comparison between the current pose and the desired pose, and providing an indication of the recommended movement. The desired pose facilitates imaging of an anatomical feature of the patient by the imaging system and the recommended movement may reposition the patient in the desired pose.
Type:
Grant
Filed:
June 3, 2019
Date of Patent:
January 5, 2021
Assignee:
GENERAL ELECTRIC COMPANY
Inventors:
David Andrew Shoudy, John Eric Tkaczyk, Xin Wang, Heather Chan
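The pose comparison and movement recommendation above could be sketched as a per-landmark difference against the desired pose; the landmark names, units, and tolerance are invented for illustration:

```python
# Hypothetical sketch: compare current vs desired pose per landmark and
# recommend the largest corrective translation. Names/units are assumptions.

def recommend_movement(current, desired, tol_cm=2.0):
    """Return (landmark, (dx, dy, dz)) for the worst deviation, or None if posed."""
    worst = None
    for name, cur in current.items():
        delta = tuple(d - c for c, d in zip(cur, desired[name]))
        magnitude = sum(v * v for v in delta) ** 0.5
        if magnitude > tol_cm and (worst is None or magnitude > worst[1]):
            worst = ((name, delta), magnitude)
    return worst[0] if worst else None

current = {"left_shoulder": (10.0, 50.0, 0.0), "right_shoulder": (40.0, 50.0, 0.0)}
desired = {"left_shoulder": (10.0, 55.0, 0.0), "right_shoulder": (40.0, 50.5, 0.0)}
print(recommend_movement(current, desired))  # ('left_shoulder', (0.0, 5.0, 0.0))
```

Here the landmarks would come from the 3D surface map; the returned translation is the "recommended movement" indicated to the patient or operator.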
Abstract: Methods and systems for detecting and correcting anomalous inputs include training a neural network to embed high-dimensional input data into a low-dimensional space with an embedding that preserves neighbor relationships. Input data items are embedded into the low-dimensional space to form respective low-dimensional codes. An anomaly is determined among the high-dimensional input data based on the low-dimensional codes. The anomaly is corrected.
Type:
Grant
Filed:
April 1, 2019
Date of Patent:
January 5, 2021
Inventors:
Renqiang Min, Farley Lai, Eric Cosatto, Hans Peter Graf
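The anomaly-determination step above (flagging inputs whose low-dimensional codes lie far from their neighbors) might be sketched as follows; the codes are faked for illustration, standing in for the trained neural network's embedding:

```python
import math

# Stand-in low-dimensional codes: in the patent these come from a trained
# neighbor-preserving neural network embedding; here they are fabricated.
codes = [(0.10, 0.20), (0.15, 0.18), (0.12, 0.25), (0.90, 0.95), (0.11, 0.22)]

def anomaly_scores(points, k=2):
    """Score each code by its mean distance to its k nearest neighbors."""
    scores = []
    for i, p in enumerate(points):
        dists = sorted(math.dist(p, q) for j, q in enumerate(points) if j != i)
        scores.append(sum(dists[:k]) / k)
    return scores

scores = anomaly_scores(codes)
anomaly_index = max(range(len(scores)), key=scores.__getitem__)
print(anomaly_index)  # 3 -- the code far from all of its neighbors
```

The corresponding high-dimensional input would then be the one flagged (and subsequently corrected) as anomalous.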
Abstract: Provided is a data recovery device, having: an acquiring unit that acquires a photon detection number distribution of an image acquired from an imaging optical system; a recovering unit that acquires an estimated image from the photon detection number distribution using a predetermined IPSF (an inverse function of a point spread function, PSF); an evaluation value calculating unit that calculates, for each of the estimated image and a plurality of images similar to the estimated image, an evaluation value indicating a likelihood that the image is an actual image; and an outputting unit that generates and outputs a physical parameter with which the evaluation value is at least a significance level.