Patents Issued on August 14, 2018
-
Patent number: 10049259
Abstract: Disclosed are a fingerprint module, a method for fabricating the same, and a mobile terminal. The fingerprint module has a fingerprint chip and a circuit board. The fingerprint chip has an identifying surface and a connecting surface opposite to the identifying surface, wherein the identifying surface is configured to identify a fingerprint of a user. The circuit board is attached to the connecting surface, wherein a sealing adhesive is disposed between the circuit board and the fingerprint chip.
Type: Grant
Filed: December 7, 2017
Date of Patent: August 14, 2018
Assignee: Guangdong Oppo Mobile Telecommunications Corp., Ltd.
Inventor: Wenzhen Zhang
-
Patent number: 10049260
Abstract: Embodiments directed towards systems and methods for tracking a human face present within a video stream are described herein. In some embodiments, the exemplary illustrative methods and the exemplary illustrative systems of the present invention are specifically configured to process image data to identify and align the presence of a face in a particular frame.
Type: Grant
Filed: January 26, 2018
Date of Patent: August 14, 2018
Assignee: Banuba Limited
Inventors: Yury Hushchyn, Aliaksei Sakolski, Alexander Poplavsky
-
Patent number: 10049261
Abstract: A method for identifying age based on facial features includes: getting a face image and capturing a face area from the face image; setting a plurality of facial feature points on the face area; defining a plurality of age feature areas on the face area based on coordinates of the plurality of feature points; acquiring age features from the plurality of age feature areas to get an age value; and comparing the age value with at least one threshold value.
Type: Grant
Filed: May 8, 2016
Date of Patent: August 14, 2018
Assignee: Cloud Network Technology Singapore Pte. Ltd.
Inventor: Ling-Chieh Tai
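The staged pipeline this abstract outlines (feature areas → a single age value → threshold comparison) can be illustrated with a minimal sketch. The feature areas, weights, and thresholds below are invented stand-ins for illustration, not the patented method.

```python
# Illustrative sketch of a threshold-based age classifier: combine
# hypothetical per-area feature scores into one "age value", then
# bucket it against thresholds. Areas, weights, and cutoffs are made up.

def age_value(area_features):
    """Combine per-area feature scores (e.g. wrinkle density per region)
    into a single age value via a weighted sum."""
    weights = {"forehead": 0.4, "eye_corners": 0.35, "mouth": 0.25}
    return sum(weights[a] * s for a, s in area_features.items())

def classify_age(value, thresholds=(0.3, 0.6)):
    """Compare the age value with thresholds to pick an age band."""
    young, old = thresholds
    if value < young:
        return "young"
    if value < old:
        return "middle-aged"
    return "senior"

features = {"forehead": 0.8, "eye_corners": 0.5, "mouth": 0.2}
print(classify_age(age_value(features)))  # weighted sum 0.545 → "middle-aged"
```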
-
Patent number: 10049262
Abstract: A method for extracting a characteristic of a three-dimensional face image includes: performing face area division, to obtain a group of face areas; projecting each face area onto a corresponding regional bounding sphere; obtaining an indication of the corresponding face area according to the regional bounding sphere, and recording the indication as a regional bounding spherical descriptor of the face area; calculating a weight of the regional bounding spherical descriptor of the face area for each face area; and obtaining a characteristic of a three-dimensional face image according to the indication of the face area and the corresponding weight.
Type: Grant
Filed: July 11, 2016
Date of Patent: August 14, 2018
Assignee: TENCENT TECHNOLOGY (SHENZHEN) COMPANY LIMITED
Inventors: Yue Ming, Jie Jiang, Tingting Liu, Juhong Wang
-
Patent number: 10049263
Abstract: A computer-implemented method of digital image analysis includes obtaining first digital video of a human subject that indicates facial expressions of the human subject; performing micro-expression analysis on the human subject using the first digital video; comparing results of the performed micro-expression analysis with content of a presentation determined to have been provided to the human subject at the same time that particular portions of the digital video were initially captured; and modifying a manner of performing interaction with the human subject or other human subjects based on the comparing of results.
Type: Grant
Filed: June 15, 2016
Date of Patent: August 14, 2018
Inventor: Stephan Hau
-
Patent number: 10049264
Abstract: According to the present disclosure, a page-turned bound medium is read continuously to acquire images, it is determined whether there is any image without the appearance of a foreign object out of the images corresponding to the same page, when it is determined that there is any image without the appearance of the foreign object, the image without the appearance of the foreign object is acquired as an output image, and when it is determined that there is not any image without the appearance of the foreign object, the images corresponding to the same page are combined to acquire the output image.
Type: Grant
Filed: April 13, 2016
Date of Patent: August 14, 2018
Assignee: PFU LIMITED
Inventor: Akira Iwayama
-
Patent number: 10049265
Abstract: Methods and apparatus to monitor environments are disclosed. An example method includes triggering a two-dimensional recognition analysis, in connection with a first frame of data, on two-dimensional data points representative of an object detected in an environment, the triggering based on satisfying a first trigger event, the first trigger event one of (1) a distance between the object and a sensor of the environment satisfying a threshold distance, or (2) elapsing of a time interval. In response to determining that the object is recognized as a person in the first frame, triggering the two-dimensional recognition analysis in connection with a second frame, the second frame subsequent to the first frame, the two-dimensional recognition analysis of the second frame performed on two-dimensional data points representative of a location in the second frame corresponding to the location of the person in the first frame.
Type: Grant
Filed: November 18, 2016
Date of Patent: August 14, 2018
Assignee: The Nielsen Company (US), LLC
Inventors: Morris Lee, Alejandro Terrazas
-
Patent number: 10049266
Abstract: Described are apparatuses, methods and storage media associated with detecting and counting people, including use of RGB and range cameras with overlapping fields of view and methods which count people in range camera stream and which characterize behavior as recognized in RGB stream.
Type: Grant
Filed: September 25, 2015
Date of Patent: August 14, 2018
Assignee: Intel Corporation
Inventors: Michael Wu, Addicam V. Sanjay
-
Patent number: 10049267
Abstract: The novel technology described in this disclosure includes an example method comprising capturing sensor data using one or more sensors describing a particular environment; processing the sensor data using one or more computing devices coupled to the one or more sensors to detect a participant within the environment; determining a location of the participant within the environment; querying a feature database populated with a multiplicity of features extracted from the environment using the location of the participant for one or more features being located proximate the location of the participant; and selecting, using the one or more computing devices, a scene type from among a plurality of predetermined scene types based on association likelihood values describing probabilities of each feature of the one or more features being located within the scene types.
Type: Grant
Filed: February 29, 2016
Date of Patent: August 14, 2018
Assignee: TOYOTA JIDOSHA KABUSHIKI KAISHA
Inventors: Eric Martinson, David Kim, Yusuke Nakano
-
Patent number: 10049268
Abstract: A method includes: displaying a digital image on a first portion of a display of a mobile device; receiving user feedback via the display of the mobile device; analyzing the user feedback to determine a meaning of the user feedback; based on the determined meaning of the user feedback, analyzing a portion of the digital image corresponding to either the point of interest or the region of interest to detect one or more connected components depicted within the portion of the digital image; classifying each detected connected component depicted within the portion of the digital image; estimating an identity of each detected connected component based on the classification of the detected connected component; and one or more of: displaying the identity of each detected connected component on a second portion of the display of the mobile device; and providing the identity of each detected connected component to a workflow.
Type: Grant
Filed: March 2, 2016
Date of Patent: August 14, 2018
Assignee: KOFAX, INC.
Inventors: Anthony Macciola, Alexander Shustorovich, Christopher W. Thrasher, Jan W. Amtrup
-
Patent number: 10049269
Abstract: An information processing apparatus includes an acquiring unit, an extraction unit, and a selection unit. The acquiring unit acquires, for multiple documents, candidates for elements representing characteristics of each of the multiple documents. The extraction unit extracts, from the candidates acquired by the acquiring unit, common elements common to two or more of the multiple documents. The selection unit extracts, from the multiple documents, a document including two or more common elements among the common elements, and determines the two or more common elements included in the extracted document to be elements representing characteristics of the document.
Type: Grant
Filed: April 13, 2016
Date of Patent: August 14, 2018
Assignee: FUJI XEROX CO., LTD.
Inventors: Nobuyuki Shigeeda, Yozo Kashima
-
Patent number: 10049270
Abstract: A method, computer system, and a computer program product for identifying sections in a document based on a plurality of visual features is provided. The present invention may include receiving a plurality of documents. The present invention may also include extracting a plurality of content blocks. The present invention may further include determining the plurality of visual features. The present invention may then include grouping the extracted plurality of content blocks into a plurality of categories. The present invention may also include generating a plurality of closeness scores for the plurality of categories by utilizing a Visual Similarity Measure. The present invention may further include generating a plurality of Association Matrices on the plurality of categories for each of the received plurality of documents based on the Visual Similarity Measure. The present invention may further include merging the plurality of categories into a plurality of clusters.
Type: Grant
Filed: December 29, 2017
Date of Patent: August 14, 2018
Assignee: International Business Machines Corporation
Inventors: Lalit Agarwalla, Rizwan Dudekula, Purushothaman K. Narayanan, Sujoy Sett
-
Patent number: 10049271
Abstract: An authentication system controlled by eye open and eye closed state and a handheld control apparatus thereof are provided. The handheld control apparatus includes a housing case, an image capturing unit and a processing unit. The housing case has a window and is suitable for a user to hold. The image capturing unit is disposed in the housing case and captures an eye area of the user through the window to obtain an image sequence. The processing unit is coupled to the image capturing unit and analyzes the image sequence to obtain eye image information of the eye area of the user. The processing unit detects an eye-open state and an eye-closed state of the user based on the eye image information, converts a plurality of the eye-open states and the eye-closed states into a blink code, and accordingly generates a control command to control a security equipment.
Type: Grant
Filed: October 9, 2014
Date of Patent: August 14, 2018
Assignee: UTECHZONE CO., LTD.
Inventors: Chia-Chun Tsou, Chia-We Hsu
-
Patent number: 10049272
Abstract: Examples are disclosed herein that relate to user authentication. One example provides a biometric identification system comprising an iris illuminator, an image sensor configured to capture light reflected from irises of a user as a result of those irises being illuminated by the iris illuminator, a drive circuit configured to drive the iris illuminator in a first mode and a second mode that each cause the irises to be illuminated differently, the first and second modes thereby yielding a first mode output at the image sensor and a second mode output at the image sensor, respectively, and a processor configured to process at least one of the first mode output and the second mode output and, in response to such processing, select one of the first mode and the second mode for use in performing an iris authentication on the user.
Type: Grant
Filed: September 24, 2015
Date of Patent: August 14, 2018
Assignee: MICROSOFT TECHNOLOGY LICENSING, LLC
Inventors: Mudit Agrawal, Karlton David Powell, Christopher Maurice Mei
-
Patent number: 10049273
Abstract: An image recognition apparatus in an embodiment includes a feature-value calculating section configured to calculate a feature value in a region of interest segmented from an image, a likelihood calculating section configured to calculate likelihood of an object present in the region of interest referencing the feature value including a plurality of feature value elements and dictionary data including a plurality of dictionary elements, and a dictionary control section configured to acquire the dictionary element corresponding to the feature value element exceeding a set value.
Type: Grant
Filed: June 12, 2015
Date of Patent: August 14, 2018
Assignee: Kabushiki Kaisha Toshiba
Inventor: Toru Sano
-
Patent number: 10049274
Abstract: Methods and systems for providing earth observation (EO) data and analytics are provided. An example method may include providing EO images of geographical areas of a pre-determined size. The EO images can be associated with geographical coordinates and an EO data type. The method may include providing a user interface to configure a use case query. The use case query may include a use case geographical area and a use case EO data type. The method may include determining, based on the use case query, a subset of the EO images overlapping with the use case geographical area and associated with the use case EO data type. The method may include generating, by the analysis module and based on the subset of the EO images, a resulting EO image corresponding to the use case geographical area and displaying, via a graphic user interface, the resulting EO image.
Type: Grant
Filed: March 27, 2018
Date of Patent: August 14, 2018
Assignee: EOS DATA ANALYTICS, INC.
Inventor: Maxym Polyakov
-
Patent number: 10049275
Abstract: The present disclosure relates generally to multicomponent optical devices having a space within the device. In various embodiments, an optical device comprises a first posterior component having an anterior surface, a posterior support component, and an anterior component having a posterior surface. An optical device can also comprise an anterior skirt. The first posterior component and the anterior skirt can comprise gas-permeable optical materials. An optical device also comprises a primary space between the posterior surface and the anterior surface, with the primary space configured to permit diffusion of a gas from a perimeter of the primary space through the space and across the anterior surface of the first posterior component. A method of forming a multicomponent optical device having a space is also provided.
Type: Grant
Filed: September 12, 2016
Date of Patent: August 14, 2018
Assignee: PARAGON CRT COMPANY LLC
Inventors: Joseph Sicari, William E. Meyers
-
Patent number: 10049276
Abstract: Embodiments are directed toward analyzing images of cables and electronic devices to augment those images with information relating to the installation or troubleshooting of such cables and electronic devices. The images are analyzed to determine non-text characteristics of a connector of the cable and non-text characteristics of at least one port on the electronic device. These non-text characteristics can be compared to each other to determine if the connector is compatible with one of the ports on the electronic device. Similarly, these non-text characteristics can be compared with non-text characteristics of known connectors and ports to determine a type of the connector and a type of the ports on the electronic device, which is used to determine their compatibility. The images are then modified or overlaid with information identifying the type of connector or port, their compatibility or lack thereof, or instructions for connecting the compatible connector and port.
Type: Grant
Filed: February 28, 2017
Date of Patent: August 14, 2018
Assignee: DISH Technologies L.L.C.
Inventors: Leslie Ann Harper, John Card, II
-
Patent number: 10049277
Abstract: A method and apparatus for tracking an object, and a method and apparatus for calculating object pose information are provided. The method of tracking the object obtains object feature point candidates by using a difference between pixel values of neighboring frames. A template matching process is performed in a predetermined region having the object feature point candidates as the center. Accordingly, it is possible to reduce a processing time needed for the template matching process. The method of tracking the object is robust in terms of sudden changes in lighting and partial occlusion. In addition, it is possible to track the object in real time. In addition, since the pose of the object, the pattern of the object, and the occlusion of the object are determined, detailed information on action patterns of the object can be obtained in real time.
Type: Grant
Filed: December 12, 2014
Date of Patent: August 14, 2018
Assignee: Samsung Electronics Co., Ltd.
Inventors: Jung-bae Kim, Haitao Wang
-
Patent number: 10049278
Abstract: A system for remote care of an animal includes a robotic animal caregiver that includes a housing, a wireless data communication system disposed within the housing and wirelessly communicatively coupled with an external data communications system, and a microprocessor in communication with the wireless data communication system disposed within the housing. The system further includes a smart collar to be worn by the animal operable to determine a geo-location and behavior information of the animal and communicate with the microprocessor.
Type: Grant
Filed: November 30, 2016
Date of Patent: August 14, 2018
Assignee: Botsitter, LLC
Inventors: Krystalka R. Womble, Robert L. Piccioni
-
Patent number: 10049279
Abstract: A method of predicting action labels for a video stream includes receiving the video stream and calculating an optical flow of consecutive frames of the video stream. An attention map is generated from the current frame of the video stream and the calculated optical flow. An action label is predicted for the current frame based on the optical flow, a previous hidden state and the attention map.
Type: Grant
Filed: September 16, 2016
Date of Patent: August 14, 2018
Inventors: Zhenyang Li, Efstratios Gavves, Mihir Jain, Cornelis Gerardus Maria Snoek
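The recurrent loop this abstract describes (optical flow → attention map → hidden-state update → label) can be sketched in miniature. Everything below, from the feature values to the attention rule and the label set, is a toy stand-in, not the claimed model.

```python
import math

# Toy sketch of an attention-weighted recurrent prediction step: motion
# magnitudes from optical flow form an attention map that re-weights
# frame features; the attended feature updates a hidden state, which
# yields a label. Features, update rule, and labels are all invented.

LABELS = ["walk", "run"]

def softmax(xs):
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def step(frame_feats, flow_mags, hidden):
    # Attention map: regions with stronger motion get more weight.
    attn = softmax(flow_mags)
    attended = sum(a * f for a, f in zip(attn, frame_feats))
    # Hidden-state update blends the previous state with the attended feature.
    hidden = 0.5 * hidden + 0.5 * attended
    # Label from the current hidden state (hypothetical decision rule).
    return LABELS[hidden > 0.5], hidden

label, h = step([0.2, 0.9, 0.4], [0.1, 2.0, 0.3], hidden=0.0)
```

A real system would carry `h` across frames, so the prediction for the current frame depends on the motion history, which is the point of the previous-hidden-state term in the abstract.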
-
Patent number: 10049280
Abstract: Various arrangements for assessing an installation of a smart home device are presented. A video camera device may capture video indicative of a location of the smart home device. The video indicative of the location of the smart home device may be analyzed to determine whether the location of the smart home device prevents the smart home device from operating within specification. An indication may then be output indicative of whether the location of the smart home device prevents the smart home device from operating within specification.
Type: Grant
Filed: March 7, 2017
Date of Patent: August 14, 2018
Assignee: Google LLC
Inventors: David Sloo, Nick Webb, Yoky Matsuoka, Anthony Michael Fadell, Matthew Lee Rogers
-
Patent number: 10049281
Abstract: A method and system for measuring and reacting to human interaction with elements in a space, such as public places (retail stores, showrooms, etc.) is disclosed which may determine information about an interaction of a three dimensional object of interest within a three dimensional zone of interest with a point cloud 3D scanner having an image frame generator generating a point cloud 3D scanner frame comprising an array of depth coordinates for respective two dimensional coordinates of at least part of a surface of the object of interest, within the three dimensional zone of interest, comprising a three dimensional coverage zone encompassing a three dimensional engagement zone and a computing comparing respective frames to determine the time and location of a collision between the object of interest and a surface of at least one of the three dimensional coverage zone or the three dimensional engagement zone encompassed by the dimensional coverage zone.
Type: Grant
Filed: November 11, 2013
Date of Patent: August 14, 2018
Assignee: Shopperception, Inc.
Inventors: Raul Ignacio Verano, Ariel Alejandro Di Stefano, Juan Ignacio Porta
-
Patent number: 10049282
Abstract: In a train interior monitoring method, in order to improve the efficiency in memory usage, when an event occurs, data containing image information, which has been recorded in a temporary image memory, is recorded in an image memory without any processing in order to reduce the file size of the data. Further, the data containing image information recorded in the image memory represents only a period of time before and after detection of the occurrence of the event, the period of time being set according to what the event is. After a given period of time has elapsed, the data containing the image information is processed in order to reduce its file size and then recorded in the image memory.
Type: Grant
Filed: August 6, 2013
Date of Patent: August 14, 2018
Assignee: MITSUBISHI ELECTRIC CORPORATION
Inventor: Kenichi Ishiguri
-
Patent number: 10049283
Abstract: Provided is a stay condition analyzing apparatus including a stay information acquirer which acquires stay information for each predetermined measurement period of time on the basis of positional information of a moving object which is acquired from a captured image of a target area, a heat map image generator which generates a heat map image obtained by visualizing the stay information, a background image generator which generates a background image from the captured image, and a display image generator which generates a display image by superimposing the heat map image on the background image. The background image generator generates the background image by performing image processing for reducing discriminability of the moving object appearing in the captured image on the captured image.
Type: Grant
Filed: March 25, 2015
Date of Patent: August 14, 2018
Assignee: PANASONIC INTELLECTUAL PROPERTY MANAGEMENT CO., LTD.
Inventors: Yuichi Matsumoto, Hiroaki Yoshio, Youichi Gouda
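The compositing step in this abstract can be sketched with small numeric grids standing in for images: detail in the background is reduced (here with a simple box blur, one possible way to lower discriminability) and the heat map is alpha-blended on top. The blur size and blend weight are arbitrary choices for the demo.

```python
# Minimal sketch: blur the background so individuals are not discernible,
# then alpha-blend a stay-count "heat map" over it. Grids of numbers
# stand in for images; blur size and alpha are arbitrary.

def box_blur(img):
    """3x3 mean filter; edge pixels average over the in-bounds window."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            vals = [img[cy][cx]
                    for cy in range(max(0, y - 1), min(h, y + 2))
                    for cx in range(max(0, x - 1), min(w, x + 2))]
            out[y][x] = sum(vals) / len(vals)
    return out

def superimpose(background, heat, alpha=0.6):
    """Per-pixel blend: display = (1 - alpha) * background + alpha * heat."""
    return [[(1 - alpha) * b + alpha * hv
             for b, hv in zip(brow, hrow)]
            for brow, hrow in zip(background, heat)]

bg = box_blur([[0, 0, 0], [0, 9, 0], [0, 0, 0]])   # center spike spreads out
display = superimpose(bg, [[0, 0, 0], [0, 1, 0], [0, 0, 0]])
```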
-
Patent number: 10049284
Abstract: A method is disclosed for using a camera on-board a vehicle to determine whether precipitation is falling near the vehicle. The method may include obtaining multiple images. Each of the multiple images may be known to photographically depict a “rain” or a “no rain” condition. An artificial neural network may be trained on the multiple images. Later, the artificial neural network may analyze one or more images captured by a first camera secured to a first vehicle. Based on that analysis, the artificial neural network may classify the first vehicle as being in “rain” or “no rain” weather.
Type: Grant
Filed: April 11, 2016
Date of Patent: August 14, 2018
Assignee: Ford Global Technologies
Inventors: Jinesh J Jain, Harpreetsingh Banvait, Ashley Elizabeth Micks, Vidya Nariyambut Murali
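The train-then-classify flow described above can be shown with a deliberately tiny stand-in for the neural network: "training" here just learns a brightness threshold from images labeled "rain" and "no rain". A real system would train a CNN; the data and decision rule below are invented for the demo.

```python
# Toy stand-in for the labeled-image training the abstract describes:
# learn a brightness threshold from "rain"/"no rain" examples, then
# classify a new image against it. Data and rule are invented.

def mean_brightness(img):
    pixels = [p for row in img for p in row]
    return sum(pixels) / len(pixels)

def train(labeled_images):
    """Place the decision threshold midway between the class means."""
    rain = [mean_brightness(i) for i, lab in labeled_images if lab == "rain"]
    clear = [mean_brightness(i) for i, lab in labeled_images if lab == "no rain"]
    return (sum(rain) / len(rain) + sum(clear) / len(clear)) / 2

def classify(img, threshold):
    # Rainy scenes in this toy dataset are darker than clear ones.
    return "rain" if mean_brightness(img) < threshold else "no rain"

data = [([[40, 50], [45, 55]], "rain"),
        ([[200, 210], [190, 220]], "no rain")]
t = train(data)
print(classify([[60, 70], [65, 75]], t))  # → "rain"
```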
-
Patent number: 10049285
Abstract: A vehicular control system includes a camera and a control having an image processor that processes captured image data to determine an object present in the forward field of view of the camera. The control is operable to determine an estimated time to arrival of another vehicle at a location that is in the projected path of travel of the equipped vehicle. Responsive to the received information being indicative of the state of a signal light at an intersection being green and responsive at least in part to (i) determination that the estimated time to arrival of the other vehicle is at least a threshold amount and (ii) determination that an object is not present in the projected path of travel of the equipped vehicle, the control may determine that it is safe for the equipped vehicle to proceed along the projected path of travel.
Type: Grant
Filed: August 21, 2017
Date of Patent: August 14, 2018
Assignee: MAGNA ELECTRONICS INC.
Inventors: Rohan J. Divekar, Paul A. VanOphem
-
Patent number: 10049286
Abstract: A method, system, and computer program product to perform image-based estimation of a risk of a vehicle having a specified status include receiving images from one or more cameras, obtaining one or more vehicle images of the vehicle from the image, classifying the vehicle based on the one or more vehicle images to determine a vehicle classification, extracting features from the one or more vehicle images based on the vehicle classification, and comparing the features with risk indicators to determine estimation of the risk. Instructions are provided for an action based on the risk.
Type: Grant
Filed: December 15, 2015
Date of Patent: August 14, 2018
Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
Inventors: Rahil Garnavi, Timothy M. Lynar, Suman Sedai, John M. Wagner
-
Patent number: 10049287
Abstract: Disclosed are systems and methods for improving interactions with and between computers in an authentication system supported by or configured with authentication servers or platforms. The systems interact to identify access and retrieve data across platforms, which data can be used to improve the quality of results data used in processing interactions between or among processors in such systems. The disclosed anti-spoofing systems and methods provide improved functionality to facial recognition systems by enabling enhanced “spoof” (or attempts to impersonate a user) detection while authenticating a user. The disclosed systems and method provide additional functionality to existing facial recognition systems that enables such systems to actually determine whether the image being captured and/or recorded is that of an actual person, as opposed to a non-human representation.
Type: Grant
Filed: May 22, 2015
Date of Patent: August 14, 2018
Assignee: OATH INC.
Inventors: Christian Holz, Miteshkumar Patel, Senaka Wimal Buthpitiya
-
Patent number: 10049288
Abstract: A managed notification system compares image(s) and/or indicia relating to the image(s) and where there is a match selectively provides a notification of the same.
Type: Grant
Filed: July 28, 2016
Date of Patent: August 14, 2018
Assignee: FaceFirst, Inc.
Inventors: Joseph Ethan Rosenkrantz, Gifford Hesketh
-
Patent number: 10049289
Abstract: A computer-implemented method selectively outputs two types of vector data representative of user-input strokes. Type one stroke objects are generated in a device including a position input sensor, on which a user operates a pointer to generate a type one stroke object representative of a stroke. A stroke starts at a pen-down time at which the pointer is placed on the position input sensor and ends at a pen-up time at which the pointer is removed therefrom. Real-time rendering of a type one stroke object is started after the pen-down time of a stroke without waiting for the pen-up time. After completion of a type one stroke object through its pen-up time, the type one stroke object is converted to a type two stroke object, which is a set of curves defining a boundary of the stroke, and can be exported as a file or rendered on a display.
Type: Grant
Filed: December 27, 2016
Date of Patent: August 14, 2018
Assignee: Wacom Co., Ltd.
Inventors: Plamen Petkov, Branimir Angelov
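The two representations this abstract contrasts can be sketched concretely: a "type one" stroke as the ordered points captured between pen-down and pen-up, and a "type two" stroke as a closed boundary outline. The boundary below is a crude constant-width offset polygon, not the actual curve fitting; the event format is also made up.

```python
import math

# Sketch of the two stroke representations: "type one" is the point list
# captured from pen-down to pen-up; conversion to "type two" replaces it
# with a closed boundary polygon of constant width. All details invented.

def capture_type_one(events):
    """Accumulate points from pen-down until pen-up."""
    stroke, down = [], False
    for kind, pt in events:
        if kind == "down":
            down = True
        if down and pt is not None:
            stroke.append(pt)
        if kind == "up":
            break
    return stroke

def to_type_two(stroke, width=2.0):
    """Offset each segment's endpoints by +/- half the width along the
    segment normal; left side plus reversed right side closes the loop."""
    half = width / 2.0
    left, right = [], []
    for (x0, y0), (x1, y1) in zip(stroke, stroke[1:]):
        dx, dy = x1 - x0, y1 - y0
        n = math.hypot(dx, dy) or 1.0
        nx, ny = -dy / n, dx / n          # unit normal to the segment
        left += [(x0 + nx * half, y0 + ny * half),
                 (x1 + nx * half, y1 + ny * half)]
        right += [(x0 - nx * half, y0 - ny * half),
                  (x1 - nx * half, y1 - ny * half)]
    return left + right[::-1]             # closed boundary polygon

events = [("down", (0, 0)), ("move", (3, 0)), ("move", (3, 4)), ("up", None)]
pts = capture_type_one(events)            # [(0, 0), (3, 0), (3, 4)]
boundary = to_type_two(pts)
```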
-
Patent number: 10049290
Abstract: An industrial vehicle positioning system and method are presented. The system includes a first imaging subsystem for acquiring a first indicia image and a second imaging subsystem for acquiring a second indicia image. An image analysis subsystem is configured for analyzing the first indicia image to acquire a first location designation, and for analyzing the second indicia image to acquire a second location designation. A processor is configured for determining the location of the industrial vehicle based upon the first location designation and the second location designation.
Type: Grant
Filed: December 29, 2015
Date of Patent: August 14, 2018
Assignee: Hand Held Products, Inc.
Inventors: James Chamberlin, Manjunatha Aswathanarayana Swamy, Praveen Issac
-
Patent number: 10049291
Abstract: According to the present disclosure, an image-processing apparatus identifies for each gradation value a connected component of pixels of not less than or not more than the gradation value neighboring and connected to each other in an input image, thereby generating hierarchical structure data of a hierarchical structure including the connected component, extracts based on the hierarchical structure data a connected component satisfying character likelihood as a character-like region, acquires a threshold value of binarization used exclusively for the character-like region, acquires a corrected region where the character-like region is binarized, acquires a background region where a gradation value of a pixel included in a region of the input image other than the corrected region is changed to a gradation value for a background, and acquires binary image data of a binary image composed of the corrected region and the background region.
Type: Grant
Filed: November 17, 2016
Date of Patent: August 14, 2018
Assignee: PFU LIMITED
Inventors: Mitsuru Nishikawa, Kiyoto Kosaka
-
Patent number: 10049292
Abstract: Tools are provided including intelligent provisions to perform processing of mail at a mailcenter that services plural mail service customers, adapted, for example, based on metrics and analytics derived from previous mail processing.
Type: Grant
Filed: September 29, 2016
Date of Patent: August 14, 2018
Assignee: RICOH COMPANY, LTD.
Inventor: Dale Walsh
-
Patent number: 10049293
Abstract: Techniques are disclosed for extracting micro-features at a pixel-level based on characteristics of one or more images. Importantly, the extraction is unsupervised, i.e., performed independent of any training data that defines particular objects, allowing a behavior-recognition system to forgo a training phase and for object classification to proceed without being constrained by specific object definitions. A micro-feature extractor that does not require training data is adaptive and self-trains while performing the extraction. The extracted micro-features are represented as a micro-feature vector that may be input to a micro-classifier which groups objects into object type clusters based on the micro-feature vectors.
Type: Grant
Filed: March 16, 2017
Date of Patent: August 14, 2018
Assignee: Omni AI, Inc.
Inventors: Wesley Kenneth Cobb, Rajkiran K. Gottumukkal, Kishor Adinath Saitwal, Ming-Jung Seow, Gang Xu, Lon W. Risinger, Jeff Graham
-
Patent number: 10049294
Abstract: The present disclosure advantageously provides apparatus, systems and methods which facilitate estimating and accounting for illumination conditions, viewing conditions and reflectance characteristics for imaged surfaces when performing color measurement, correction and/or transformation in an imaging process, such as photography. Advantageously, the disclosed apparatus, systems and methods may utilize a set of one or more illumination target elements for extrapolating illumination conditions from an imaged scene. The disclosure may be used to improve determination of color correction/transformation parameters and/or to facilitate determining a reflectance model for a target surface of interest.
Type: Grant
Filed: January 30, 2015
Date of Patent: August 14, 2018
Assignee: X-Rite Switzerland GmbH
Inventors: James William Vogh, Jr., Olivier Calas, Beat Frick
-
Patent number: 10049295
Abstract: Methods and systems detect changes occurring over time between synthetic aperture sonar (SAS) images. A processor performs coarse navigational alignment, fine-scale co-registration and local co-registration between current image data and historical image data. Local co-registration includes obtaining correlation peaks for large neighborhood non-overlapping patches. Relative patch translations are estimated and parameterized into error vectors. Interpolation functions formed from the vectors re-map the current image onto the same grid as the historical image and the complex correlation coefficient between images is calculated. The resulting interferogram is decomposed into surge and sway functions used to define the argument of a phase function, which is multiplied by the current image to remove the effects of surge and sway on the interferogram. Based on the aforementioned computations, a canonical correlation analysis is performed to detect scene changes between the historical and new SAS images.
Type: Grant
Filed: August 12, 2016
Date of Patent: August 14, 2018
Assignee: The United States of America as represented by the Secretary of the Navy
Inventors: Tesfaye G-Michael, Daniel Sternlicht, Bradley Marchand, James Derek Tucker, Timothy M. Marston
-
Patent number: 10049296
Abstract: A grain loss sensor array system is provided for an agricultural harvester. At least one thermal sensing device is attached to a header of the agricultural harvester and captures infrared images or video of the ground. A controller detects pre-harvest loss and harvest loss using the infrared images or video by recognizing a temperature difference or a characteristic thermal difference between the pre-harvest loss, the harvest loss, and the ground. The controller may communicate with or be integrated with a yield monitor to provide information concerning the pre-harvest loss and harvest loss to an operator of the agricultural harvester.
Type: Grant
Filed: August 17, 2016
Date of Patent: August 14, 2018
Assignee: CNH Industrial America LLC
Inventor: Eric L. Walker
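The core detection idea, flagging pixels whose temperature departs from the ground temperature, can be sketched as a simple threshold; the threshold value and the median-based ground-temperature estimate are illustrative assumptions, not values from the patent:

```python
import numpy as np

# Toy sketch: flag thermal-image pixels that differ from the (assumed)
# ground temperature by more than `delta` degrees, as a proxy for grain
# lying on or near the ground.
def detect_loss_pixels(thermal, delta=2.0):
    """Return a boolean mask of pixels at least `delta` degrees away
    from the median (taken as the ground) temperature."""
    ground = np.median(thermal)
    return np.abs(thermal - ground) >= delta
```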
-
Patent number: 10049297
Abstract: The invention provides a data-driven method for transferring indoor scene layout and color style, including: preprocessing images in an indoor image data set, which includes manually labeling semantic information and layout information; obtaining indoor layout and color rules on the data set by learning algorithms; performing object-level semantic segmentation on the input indoor reference image, or performing object-level and component-level segmentations using color segmentation methods, to extract layout constraints and color constraints of reference images, associating the reference images with the indoor 3D scene via the semantic information; constructing a graph model for the indoor reference image scene and the indoor 3D scene to express indoor scene layout and color; performing similarity measurement on the indoor scene and searching for similar images in the data set to obtain an image sequence with gradient layouts from reference images to the input 3D scene; performing image-sequence-guided layout and color transfer …
Type: Grant
Filed: March 20, 2017
Date of Patent: August 14, 2018
Assignee: BEIHANG UNIVERSITY
Inventors: Xiaowu Chen, Jianwei Li, Qing Li, Dongqing Zou, Bo Gao, Qinping Zhao
-
Patent number: 10049298
Abstract: An image management system includes a controller and one or more analysis processors. The controller is configured to receive search parameters that specify at least one of operational data or a range of operational data of one or more vehicle systems. The one or more analysis processors are configured to search remotely stored image data based on the search parameters to identify matching image data. The remotely stored image data was obtained by one or more imaging systems disposed onboard the one or more vehicle systems, and is associated with the operational data of the one or more vehicle systems that was current when the remotely stored image data was acquired. The one or more analysis processors also are configured to obtain the matching image data having the operational data specified by the search parameters and to present the matching image data to an operator.
Type: Grant
Filed: September 12, 2014
Date of Patent: August 14, 2018
Assignee: General Electric Company
Inventors: Mark Bradshaw Kraeling, Anwarul Azam, Matthew Lawrence Blair, Shannon Joseph Clouse
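The described search amounts to filtering image records by the operational data attached to each at acquisition time. A minimal sketch, with invented record fields (`image`, `speed`) and a hypothetical speed-range parameter standing in for the patent's generic "operational data":

```python
# Hypothetical sketch: filter remotely stored image records by the
# operational data that was current when each image was acquired.
def search_image_data(records, min_speed=None, max_speed=None):
    """records: iterable of dicts with 'image' and 'speed' keys."""
    matches = []
    for rec in records:
        if min_speed is not None and rec["speed"] < min_speed:
            continue
        if max_speed is not None and rec["speed"] > max_speed:
            continue
        matches.append(rec)
    return matches
```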
-
Patent number: 10049299
Abstract: The invention discloses a deep learning based method, and a corresponding apparatus, for three-dimensional (3D) model triangular facet feature learning and classifying. The method includes: constructing a deep convolutional neural network (CNN) feature learning model; training the deep CNN feature learning model; extracting a feature from, and constructing a feature vector for, a 3D model triangular facet having no class label, and reconstructing a feature in the constructed feature vector using a bag-of-words algorithm; determining an output feature corresponding to the 3D model triangular facet having no class label according to the trained deep CNN feature learning model and an initial feature corresponding to the 3D model triangular facet having no class label; and performing classification. The method enhances the capability to describe 3D model triangular facets, thereby ensuring the accuracy of 3D model triangular facet feature learning and classifying results.
Type: Grant
Filed: February 22, 2017
Date of Patent: August 14, 2018
Assignee: BEIHANG UNIVERSITY
Inventors: Xiaowu Chen, Kan Guo, Dongqing Zou, Qinping Zhao
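The bag-of-words reconstruction step mentioned here is, in its standard form, a nearest-codeword histogram over a learned vocabulary. A generic sketch of that standard step (the vocabulary and descriptors are random placeholders, not the patent's learned features):

```python
import numpy as np

# Sketch of the standard bag-of-words step: re-express a set of raw
# descriptors as a normalized histogram of nearest-codeword assignments.
def bag_of_words(features, vocabulary):
    """features: (N, D) descriptors; vocabulary: (K, D) codewords.
    Returns a length-K normalized histogram."""
    d = np.linalg.norm(features[:, None, :] - vocabulary[None, :, :], axis=2)
    nearest = d.argmin(axis=1)                     # codeword index per feature
    hist = np.bincount(nearest, minlength=len(vocabulary)).astype(float)
    return hist / hist.sum()
```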
-
Patent number: 10049300
Abstract: Systems and methods of generating a compact visual vocabulary are provided. Descriptor sets related to digital representations of objects are obtained, clustered and partitioned into cells of a descriptor space, and a representative descriptor and index are associated with each cell. Generated visual vocabularies could be stored in client-side devices and used to obtain content information related to objects of interest that are captured.
Type: Grant
Filed: February 27, 2018
Date of Patent: August 14, 2018
Assignee: Nant Holdings IP, LLC
Inventors: Bing Song, David McKinnon
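The cluster-and-represent step can be sketched with plain Lloyd's k-means, one centroid per cell serving as the representative descriptor; the patent does not specify its clustering algorithm, so k-means with a deterministic farthest-point initialization is an assumption made for the demo:

```python
import numpy as np

# Rough sketch: partition a descriptor space into k cells and keep one
# representative descriptor (the centroid) per cell.
def build_vocabulary(descriptors, k, iters=20):
    # greedy farthest-point initialization keeps the sketch deterministic
    centers = [descriptors[0]]
    for _ in range(k - 1):
        d = np.min([np.linalg.norm(descriptors - c, axis=1) for c in centers],
                   axis=0)
        centers.append(descriptors[d.argmax()])
    centers = np.array(centers, dtype=float)
    for _ in range(iters):  # standard Lloyd iterations
        d = np.linalg.norm(descriptors[:, None] - centers[None], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            members = descriptors[labels == j]
            if len(members):
                centers[j] = members.mean(axis=0)
    return centers  # one representative descriptor per cell
```

Looking up a query descriptor's cell is then a nearest-center search over the returned array, matching the "representative descriptor and index" pairing in the abstract.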
-
Patent number: 10049301
Abstract: A computer-implemented method for identifying an optimal set of parameters for medical image acquisition includes receiving a set of input parameters corresponding to a medical imaging scan of a patient and using a model of operator parameter selection to determine a set of optimal target parameter values for a medical image scanner based on the set of input parameters. The medical imaging scan of the patient is performed using the set of optimal target parameter values to acquire one or more images, and feedback is collected from one or more users in response to acquisition of the one or more images. This feedback is used to update the model of operator parameter selection, thereby yielding an updated model of operator parameter selection.
Type: Grant
Filed: August 1, 2016
Date of Patent: August 14, 2018
Assignee: Siemens Healthcare GmbH
Inventors: Stefan Kluckner, Dorin Comaniciu
-
Patent number: 10049302
Abstract: A computing device trains models for streaming classification. A baseline penalty value is computed that is inversely proportional to a square of a maximum explanatory variable value. A set of penalty values is computed based on the baseline penalty value. For each penalty value of the set of penalty values, a classification type model is trained using the respective penalty value and the observation vectors to compute parameters that define a trained model, the classification type model is validated using the respective penalty value and the observation vectors to compute a validation criterion value that quantifies a validation error, and the validation criterion value, the respective penalty value, and the parameters that define a trained model are stored to the computer-readable medium. The classification type model is trained to predict the response variable value of each observation vector based on the respective explanatory variable value of each observation vector.
Type: Grant
Filed: March 5, 2018
Date of Patent: August 14, 2018
Assignee: SAS Institute Inc.
Inventors: Jun Liu, Yan Xu, Joshua David Griffin, Manoj Keshavmurthi Chari
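The penalty schedule can be illustrated numerically: a baseline inversely proportional to the square of the largest explanatory value, expanded into a grid of candidates to train and validate against. The proportionality constant and the log-spaced grid are illustrative assumptions; the patent specifies neither:

```python
import numpy as np

# Numeric sketch of the penalty schedule: baseline = c / max(|x|)^2,
# expanded into a log-spaced grid of candidate penalties.
def penalty_grid(x, n=5, c=1.0):
    """x: (N,) explanatory values. Returns n penalties spanning the baseline."""
    baseline = c / np.max(np.abs(x)) ** 2
    return baseline * np.logspace(-2, 2, n)  # baseline/100 .. baseline*100
```

Each candidate penalty would then be used to train and validate one model, with the penalty minimizing the validation criterion kept.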
-
Patent number: 10049303
Abstract: Methods and a system for identifying reflective surfaces in a scene are provided herein. The system may include a sensing device configured to capture a scene. The system may further include a storage device configured to store three-dimensional positions of at least some of the objects in the scene. The system may further include a computer processor configured to attempt to obtain a reflective surface representation for one or more candidate surfaces selected from the surfaces in the scene. In a case that the attempt is successful, the computer processor is further configured to determine that the candidate surface is indeed a reflective surface defined by the obtained surface representation. According to some embodiments of the present invention, in a case that the attempt is unsuccessful, the recognized portion of the object is determined to be an object that is independent of the stored objects.
Type: Grant
Filed: October 1, 2015
Date of Patent: August 14, 2018
Assignee: Infinity Augmented Reality Israel Ltd.
Inventors: Matan Protter, Motti Kushnir, Felix Goldberg
-
Patent number: 10049304
Abstract: A method and system for detecting occupancy in a space use computer vision techniques. In one embodiment an object is detected in an image of the space. If the object is detected in a first area of the image, a shape of the object is determined based on a first shape feature of the object, and if the object is detected in a second area of the image, the shape of the object is determined based on a second shape feature of the object. The object may be determined to be an occupant based on the determined shape of the object.
Type: Grant
Filed: January 10, 2017
Date of Patent: August 14, 2018
Assignee: POINTGRAB LTD.
Inventors: Yonatan Hyatt, Benjamin Neeman, Jonathan Laserson
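The area-dependent rule can be made concrete with a toy classifier in which the image half determines which shape feature is consulted; the two features (aspect ratio, bounding-box area), the split point, and the thresholds are all invented for the demo and are not from the patent:

```python
# Toy sketch of the area-dependent rule: which shape feature is used
# depends on where in the image the object was detected.
def classify_shape(bbox, image_width):
    """bbox: (x, y, w, h) in pixels. Left half uses aspect ratio as the
    first shape feature; right half uses bounding-box area as the second."""
    x, y, w, h = bbox
    if x < image_width / 2:                             # first image area
        return "occupant" if h / w > 1.5 else "other"
    return "occupant" if w * h > 400 else "other"       # second image area
```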
-
Patent number: 10049305
Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for classification using a neural network. One of the methods processes an input through each of multiple layers of a neural network to generate an output, wherein each of the multiple layers includes a respective plurality of nodes. For a particular layer of the multiple layers, the method includes: receiving, by a classification system, an activation vector as input for the particular layer; selecting one or more nodes in the particular layer using the activation vector and a hash table that maps numeric values to nodes in the particular layer; and processing the activation vector using the selected nodes to generate an output for the particular layer.
Type: Grant
Filed: July 21, 2017
Date of Patent: August 14, 2018
Assignee: Google LLC
Inventors: Sudheendra Vijayanarasimhan, Jay Yagnik
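The node-selection step resembles locality-sensitive hashing of the activation vector. A minimal sketch under that assumption, with a random-hyperplane hash family and a table keyed by the resulting bit pattern (both are illustrative choices; the patent's hash family and table layout are not specified here):

```python
import numpy as np

# Sketch: hash the activation vector with random hyperplanes, then look
# up which nodes of the layer to evaluate for that hash bucket.
def select_nodes(activation, hyperplanes, table):
    """hyperplanes: (B, D) array defining B sign-hash bits;
    table: dict mapping integer bucket keys to lists of node ids."""
    bits = (hyperplanes @ activation > 0).astype(int)
    key = int("".join(map(str, bits)), 2)
    return table.get(key, [])
```

Only the returned nodes would then be evaluated, which is the point of the scheme: the cost of the layer scales with the bucket size rather than the full node count.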
-
Patent number: 10049306
Abstract: Aspects of the present disclosure involve a system and method for learning from images of transactional data. In one embodiment, machine learning is implemented on images created from raw data deriving from a user account, in order to classify information in a more accurate manner.
Type: Grant
Filed: December 29, 2016
Date of Patent: August 14, 2018
Assignee: PAYPAL, INC.
Inventors: Lian Liu, Hui-Min Chen
-
Patent number: 10049307
Abstract: Technical solutions are described for training an object-recognition neural network that identifies an object in a computer-readable image. An example method includes assigning a first neural network for determining a visual alignment model of the images, used for determining a normalized alignment of the object. The method further includes assigning a second neural network for determining a visual representation model of the images, used for recognizing the object. The method further includes determining the visual alignment model by training the first neural network and determining the visual representation model by training the second neural network independently of the first. The method further includes determining a combined object recognition model by training a combination of the first neural network and the second neural network. The method further includes recognizing the object in the image based on the combined object recognition model by passing the image through each of the neural networks.
Type: Grant
Filed: April 4, 2016
Date of Patent: August 14, 2018
Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
Inventors: Sharathchandra U. Pankanti, Xi Peng, Nalini K. Ratha
-
Patent number: 10049308
Abstract: Training images can be synthesized in order to obtain enough data to train a convolutional neural network to recognize various classes of a type of item. Images can be synthesized by blending images of items labeled using those classes into selected background images. Catalog images can represent items against a solid background, which can be identified using connected components or other such approaches. Removing the background using such approaches can result in edge artifacts proximate the item region. To improve the results, one or more operations are performed, such as a morphological erosion operation followed by an opening operation. The isolated item portion then can be blended into a randomly selected background region in order to generate a synthesized training image. The training images can be used with real world images to train the neural network.
Type: Grant
Filed: February 21, 2017
Date of Patent: August 14, 2018
Assignee: A9.com, Inc.
Inventors: Arnab Sanat Kumar Dhua, Ming Du, Aishwarya Natesh
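The mask clean-up sequence (erosion to pull the mask away from edge artifacts, then an opening to drop small speckles) can be sketched with plain 4-neighbour operations in NumPy; the structuring element and the hard-paste blend are simplifications of whatever the production system uses:

```python
import numpy as np

# Sketch of the mask clean-up: erode the item mask away from background
# edge artifacts, then "open" it (erosion followed by dilation) to remove
# isolated specks, before pasting the item onto a new background.
# Note: np.roll wraps at the borders, which is fine for interior masks.
def erode(mask):
    m = mask.astype(bool)
    return (m
            & np.roll(m, 1, 0) & np.roll(m, -1, 0)
            & np.roll(m, 1, 1) & np.roll(m, -1, 1))

def dilate(mask):
    m = mask.astype(bool)
    return (m
            | np.roll(m, 1, 0) | np.roll(m, -1, 0)
            | np.roll(m, 1, 1) | np.roll(m, -1, 1))

def clean_mask(mask):
    eroded = erode(mask)            # shrink away from edge artifacts
    return dilate(erode(eroded))    # opening removes isolated specks

def blend(item, mask, background):
    out = background.copy()
    out[mask] = item[mask]          # hard paste; real systems feather edges
    return out
```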