Patent Applications Published on March 7, 2019
  • Publication number: 20190073517
    Abstract: An image processing device includes an image processing unit that performs image processing on an observed image in which a cell is imaged and an image processing method selector that is configured to determine an observed image processing method for analyzing the imaged cell on the basis of information of a processed image obtained through image processing of the image processing unit.
    Type: Application
    Filed: September 7, 2018
    Publication date: March 7, 2019
    Inventors: Hiroaki KII, Yasujiro KIYOTA, Takayuki UOZUMI, Yoichi YAMAZAKI
  • Publication number: 20190073518
    Abstract: Described are techniques for indoor mapping and navigation. A reference mobile device includes sensors that capture range, depth and position data, and processes such data. The reference mobile device further includes a processor that is configured to process the captured data to generate a 2D or 3D mapping of localization information of the device that is rendered on a display unit, execute an object recognition to identify types of installed devices of interest in a part of the 2D or 3D device mapping, and integrate the 3D device mapping in the built environment to objects in the environment through capturing point cloud data along with 2D image or video frame data of the built environment.
    Type: Application
    Filed: November 6, 2018
    Publication date: March 7, 2019
    Applicant: Tyco Fire & Security GmbH
    Inventors: Manjuprakash Rama Rao, Rambabu Chinta, Surajit Borah
  • Publication number: 20190073519
    Abstract: A computing device determines, in a first frame, a first location of a feature point and also determines in a second frame, a second location of the feature point. The computing device generates a motion vector for the feature point in the first frame and relocates the first location in the first frame to a first refined location based on the motion vector. The computing device generates a smoothed location of the feature point in the second frame based on the refined location and the second location of the feature point in the second frame.
    Type: Application
    Filed: August 27, 2018
    Publication date: March 7, 2019
    Inventors: KaiKai (Clay) Hsu, Chih-Yu (Detta) Cheng
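The refine-and-smooth scheme sketched in the 20190073519 abstract can be illustrated as follows; the blending weight, the simple averaging step, and the function name are assumptions for illustration, not the claimed method.

```python
# Hypothetical sketch of the feature-point smoothing in 20190073519.
# The alpha weight and final averaging are assumptions, not the patented method.

def smooth_feature_point(first_loc, second_loc, alpha=0.5):
    """Relocate the first-frame location along the motion vector, then
    blend it with the second-frame location to get a smoothed point."""
    # Motion vector from the first observed location to the second.
    motion = (second_loc[0] - first_loc[0], second_loc[1] - first_loc[1])
    # Refined first location: moved part-way along the motion vector.
    refined = (first_loc[0] + alpha * motion[0], first_loc[1] + alpha * motion[1])
    # Smoothed second-frame location: average of refined and observed.
    return ((refined[0] + second_loc[0]) / 2, (refined[1] + second_loc[1]) / 2)
```

With `alpha=0.5`, a point observed at (0, 0) and then (4, 4) is refined to (2, 2) and smoothed to (3, 3), damping frame-to-frame jitter.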
  • Publication number: 20190073520
    Abstract: Described is a system for identifying individuals within a digital file. The system accesses a digital file describing the movement of unidentified individuals and detects a face for an unidentified individual at a plurality of locations in the video. The system divides the digital file into a set of segments and detects a face of an unidentified individual by applying a detection algorithm to each segment. For each detected face, the system applies a recognition algorithm to extract feature vectors representative of the identity of the detected faces, which are stored in computer memory. The system applies a recognition algorithm to query the extracted feature vectors for target individuals by matching unidentified individuals to target individuals, determining a confidence level describing the likelihood that the match is correct, and generating a report to be presented to a user of the system.
    Type: Application
    Filed: August 31, 2018
    Publication date: March 7, 2019
    Inventors: Balan Rama Ayyar, Anantha Krishnan Bangalore, Jerome Francois Berclaz, Reechik Chatterjee, Nikhil Kumar Gupta, Ivan Kovtun, Vasudev Parameswaran, Timo Pekka Pylvaenaeinen, Rajendra Jayantilal Shah
  • Publication number: 20190073521
    Abstract: An auxiliary filtering device for face recognition is provided. The auxiliary filtering device is used to exclude an ineligible object to be identified according to the relative relationship between object distances and image sizes, the image variation over time and/or the feature difference between images captured by different cameras, to prevent the face recognition from being cracked with a photo or a video.
    Type: Application
    Filed: September 6, 2017
    Publication date: March 7, 2019
    Inventor: En-Feng HSU
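The distance-versus-size check described in 20190073521 can be sketched minimally; the inverse-size model, the tolerance, and the function name below are assumptions, not the patented filter.

```python
# Hypothetical liveness filter in the spirit of 20190073521: a real face's
# apparent image size should scale roughly inversely with its distance,
# whereas a flat photo held at varying distances often breaks this relation.
# The size * distance ~ constant model and the tolerance are assumptions.

def plausible_face(size_near, dist_near, size_far, dist_far, tol=0.25):
    """Check that size * distance is roughly constant across two captures."""
    k_near = size_near * dist_near
    k_far = size_far * dist_far
    return abs(k_near - k_far) <= tol * max(k_near, k_far)
```

A face measuring 100 px at 1 m and 50 px at 2 m passes; one that stays 100 px at both distances is flagged as ineligible.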
  • Publication number: 20190073522
    Abstract: One of the aspects of the present invention discloses a feature point detection method. The method comprises: acquiring a face region in an input image; acquiring first positions of first feature points and second feature points according to a pre-generated first model; estimating second positions of the first feature points according to the first positions of the first feature points and pre-generated second models; detecting third positions of the first feature points and the second feature points according to the second positions of the first feature points, the first positions of the second feature points and pre-generated third models. According to the present invention, the finally detected face shape can more closely approximate the actual face shape.
    Type: Application
    Filed: February 22, 2017
    Publication date: March 7, 2019
    Inventors: Dongyue Zhao, Yaohai Huang, Xian Li
  • Publication number: 20190073523
    Abstract: A system and method for detecting subliminal facial responses of a human subject to subliminal stimuli. The method includes: receiving captured first facial response data approximately time-locked with a presentation of subliminal target stimuli to a plurality of human subjects; receiving captured second facial response data approximately time-locked with a presentation of subliminal foil stimuli to the plurality of human subjects; receiving captured unidentified facial response data to a subliminal stimulus from the target human subject; determining a target probability measure that the unidentified facial response data of the target human subject is in response to the subliminal target stimuli using a machine learning model trained with a subliminal response training set, the subliminal response training set comprising the first captured facial response data and the captured second facial response data; and outputting the target probability measure.
    Type: Application
    Filed: November 14, 2017
    Publication date: March 7, 2019
    Inventors: Kang LEE, Pu ZHENG, Marzio POZZUOLI
  • Publication number: 20190073524
    Abstract: A method for predicting walking behaviors includes: encoding walking behavior information of at least one target object in a target scene within a historical time period M to obtain a first offset matrix for representing the walking behavior information of the at least one target object within the historical time period M; inputting the first offset matrix into a neural network, and outputting by the neural network a second offset matrix for representing walking behavior information of the at least one target object within a future time period M′; and decoding the second offset matrix to obtain the walking behavior prediction information of the at least one target object within the future time period M′.
    Type: Application
    Filed: October 30, 2018
    Publication date: March 7, 2019
    Applicant: BEIJING SENSETIME TECHNOLOGY DEVELOPMENT CO., LTD
    Inventors: Shuai YI, Hongsheng LI, Xiaogang WANG
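The encode/decode steps around the neural network in 20190073524 can be illustrated with offset matrices; the network itself is omitted here, and the array shapes and function names are assumptions for illustration.

```python
import numpy as np

# Illustrative encode/decode of a walking trajectory as an offset matrix,
# following the general scheme in 20190073524. The neural network that maps
# historical offsets to future offsets is not shown.

def encode_offsets(positions):
    """Positions (T+1, 2) -> offset matrix (T, 2) of per-step displacements."""
    positions = np.asarray(positions, dtype=float)
    return np.diff(positions, axis=0)

def decode_offsets(start, offsets):
    """Invert encode_offsets: cumulative sum of offsets from a start point."""
    return np.vstack([start, start + np.cumsum(offsets, axis=0)])
```

Decoding the encoded offsets from the original start point reproduces the trajectory, so the two steps are exact inverses.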
  • Publication number: 20190073525
    Abstract: Provided is a sign language recognition system, and the sign language recognition system includes an acquisition unit configured to acquire an electromyogram signal of a user from a sensor measurement device worn around an arm of the user, an extraction unit configured to extract a muscle active section from the electromyogram signal to detect a sign language gesture of the user, a producing unit configured to produce a first feature vector by performing signal processing on the muscle active section, a search unit configured to search a database for a signal corresponding to the first feature vector, and an output unit configured to output a text corresponding to the found signal.
    Type: Application
    Filed: October 17, 2016
    Publication date: March 7, 2019
    Inventors: Young Ho KIM, Seong Jung KIM, Han Soo LEE, Jong Man KIM, Min JO, Eun Kyoung CHOI, Soon Jae AHN, Young Jae JEONG
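The search unit of 20190073525 can be approximated by a nearest-neighbor lookup; the Euclidean metric and the toy database are assumptions for illustration, not the patented search.

```python
import numpy as np

# A minimal stand-in for the search unit in 20190073525: match an EMG
# feature vector against a small database and return the associated text.
# The database contents and the distance metric are assumptions.

def search_gesture(feature_vec, database):
    """Return the text whose stored feature vector is nearest (Euclidean)."""
    feature_vec = np.asarray(feature_vec, dtype=float)
    best_text, best_dist = None, float("inf")
    for text, ref_vec in database.items():
        dist = float(np.linalg.norm(feature_vec - np.asarray(ref_vec, dtype=float)))
        if dist < best_dist:
            best_text, best_dist = text, dist
    return best_text
```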
  • Publication number: 20190073526
    Abstract: An authentication system and method is configured to correlate a first computer mounted with a touch panel owned by a store and a second computer, such as a smart device held by a client, for performing a contact operation using an input device, and to confirm that both exist in the same space based on the difference between their contact times.
    Type: Application
    Filed: September 5, 2017
    Publication date: March 7, 2019
    Inventors: Hiroshi Kirita, Junpei Shibata, Hiroki Oyama, Norikazu Nakato
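The same-space check in 20190073526 reduces to comparing two contact timestamps; the threshold below is an assumption for illustration.

```python
# Sketch of the same-space check in 20190073526: the store terminal and the
# client device each record the time of a contact operation, and a small
# enough difference between the two timestamps is taken as evidence that
# both devices share a physical space. The 200 ms threshold is an assumption.

def same_space(touch_time_store, touch_time_client, max_delta=0.2):
    """True when the two contact timestamps (in seconds) are close enough."""
    return abs(touch_time_store - touch_time_client) <= max_delta
```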
  • Publication number: 20190073527
    Abstract: Context data associated with image content is described herein. In some implementations, image content is received from an imager. Context data associated with the received image content is gathered. The image content is printed to provide a printed image, and a readable tag embodying the context data is physically associated with the printed image.
    Type: Application
    Filed: September 6, 2017
    Publication date: March 7, 2019
    Applicant: Motorola Mobility LLC
    Inventors: Scott Patrick DeBates, Douglas Alfred Lautner
  • Publication number: 20190073528
    Abstract: A method, computer system, and a computer program product for identifying sections in a document based on a plurality of visual features is provided. The present invention may include receiving a plurality of documents. The present invention may also include extracting a plurality of content blocks. The present invention may further include determining the plurality of visual features. The present invention may then include grouping the extracted plurality of content blocks into a plurality of categories. The present invention may also include generating a plurality of closeness scores for the plurality of categories by utilizing a Visual Similarity Measure. The present invention may further include generating a plurality of Association Matrices on the plurality of categories for each of the received plurality of documents based on the Visual Similarity Measure. The present invention may further include merging the plurality of categories into a plurality of clusters.
    Type: Application
    Filed: September 7, 2017
    Publication date: March 7, 2019
    Inventors: Lalit Agarwalla, Rizwan Dudekula, Purushothaman K. Narayanan, Sujoy Sett
  • Publication number: 20190073529
    Abstract: The present invention relates to a system and a method for comparing information contained on at least two documents belonging to an entity. The present invention includes at least one device configured to receive information from at least one first document and at least one second document; then, compare at least one first document information and at least one second document information; and determine whether at least one second document contains at least one first document information. The present invention then outputs a result of whether the at least one second document contains at least one first document information.
    Type: Application
    Filed: October 1, 2018
    Publication date: March 7, 2019
    Inventors: Frank Mandelbaum, Russell T. Embry
  • Publication number: 20190073530
    Abstract: A camera system comprises an image capturing device, and connected to it are an object classification module and a calibration module. The object classification module is operable to determine whether or not an object in an image is a member of an object class, and the calibration module is operable to estimate representative sizes of the object. The object classification module may determine a confidence parameter that is used by the calibration module, or conversely, the calibration module may produce a size that is used by the classification module.
    Type: Application
    Filed: November 7, 2018
    Publication date: March 7, 2019
    Inventors: Mahesh SAPTHARISHI, Dimitri A. LISIN, Aleksey LIPCHIN, Igor REYZIN
  • Publication number: 20190073531
    Abstract: Described are methods and systems for determining authenticity. For example, the method may include providing an object of authentication, capturing characteristic data from the object of authentication, deriving authentication data from the characteristic data of the object of authentication, and comparing the authentication data with an electronic database comprising reference authentication data to provide an authenticity score for the object of authentication. The reference authentication data may correspond to one or more reference objects of authentication other than the object of authentication.
    Type: Application
    Filed: November 13, 2018
    Publication date: March 7, 2019
    Applicant: CLEARMARK SYSTEMS, LLC
    Inventors: Gary L. Duerksen, Seth A. Miller
  • Publication number: 20190073532
    Abstract: An image processing apparatus capable of easily generating a two-dimensional panoramic image at a high speed from a plurality of three-dimensional images includes an acquisition unit configured to acquire a generation condition of a first en-face image generated from a first three-dimensional image of a target eye, a first generation unit configured to generate a second en-face image from a second three-dimensional image of the target eye by applying the generation condition acquired by the acquisition unit to the second three-dimensional image, and a second generation unit configured to generate a combined image by combining the first en-face image with the second en-face image.
    Type: Application
    Filed: August 31, 2018
    Publication date: March 7, 2019
    Inventors: Daisuke Kawase, Hiroki Uchida, Osamu Sagano
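Applying one generation condition to two volumes, as in 20190073532, can be sketched with a mean-intensity projection over a shared depth range; the projection type and the side-by-side combination are assumptions, not the claimed apparatus.

```python
import numpy as np

# Illustrative reuse of a single en-face generation condition across two
# 3D volumes of the same eye, then a simple side-by-side combination.
# Mean projection over a depth range stands in for the generation condition.

def en_face(volume, z_range):
    """Mean-intensity projection of a (Z, H, W) volume over a depth range."""
    z0, z1 = z_range
    return np.asarray(volume, dtype=float)[z0:z1].mean(axis=0)

def combine(volume_a, volume_b, z_range):
    """Apply the same generation condition to both volumes and tile them."""
    return np.hstack([en_face(volume_a, z_range), en_face(volume_b, z_range)])
```

Reusing the same `z_range` for both volumes is the point of the scheme: the two en-face images are generated under identical conditions, so they can be combined consistently.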
  • Publication number: 20190073533
    Abstract: Systems and methods for robust biometric applications using a detailed eye shape model are described. In one aspect, after receiving an eye image of an eye (e.g., from an eye-tracking camera on an augmented reality display device), an eye shape (e.g., upper or lower eyelids, an iris, or a pupil) of the eye in the eye image is calculated using cascaded shape regression methods. Eye features related to the estimated eye shape can then be determined and used in biometric applications, such as gaze estimation or biometric identification or authentication.
    Type: Application
    Filed: September 1, 2017
    Publication date: March 7, 2019
    Inventors: Jixu Chen, Gholamreza Amayeh
  • Publication number: 20190073534
    Abstract: A method and system for multi-spectral imagery acquisition and analysis, the method including capturing preliminary multi-spectral aerial images according to pre-defined survey parameters at a pre-selected resolution, automatically performing preliminary analysis on site or location in the field using large scale blob partitioning of the captured images in real or near real time, detecting irregularities within the pre-defined survey parameters and providing an output corresponding thereto, and determining, from the preliminary analysis output, whether to perform a second stage of image acquisition and analysis at a higher resolution than the pre-selected resolution.
    Type: Application
    Filed: November 8, 2016
    Publication date: March 7, 2019
    Inventors: IRA DVIR, NITZAN RABINOWITZ BATZ
  • Publication number: 20190073535
    Abstract: There is provided an image capturing apparatus that captures a plurality of images, calculates a three-dimensional position from the plurality of images, and outputs the plurality of images and information about the three-dimensional position. The image capturing apparatus includes an image capturing unit, a camera parameter storage unit, a position calculation unit, a position selection unit, and an image complementing unit. The image capturing unit outputs the plurality of images using at least three cameras. The camera parameter storage unit stores in advance camera parameters including occlusion information. The position calculation unit calculates three dimensional positions of a plurality of points. The position selection unit selects a piece of position information relating to a subject area that does not have an occlusion, and outputs selected position information. The image complementing unit generates a complementary image, and outputs the complementary image and the selected position information.
    Type: Application
    Filed: October 30, 2018
    Publication date: March 7, 2019
    Inventors: Kunio NOBORI, Satoshi SATO, Takeo AZUMA
  • Publication number: 20190073536
    Abstract: A hybrid hyperspectral augmented reality device is disclosed. The hybrid hyperspectral augmented reality device includes multi-level imaging sensors comprising at least two of: a long-wave infrared sensor capable of detecting an object blocked from its visual line of sight, a hyperspectral imaging sensor capable of detecting chemical properties of the object, and a frequency-modulated millimeter-wave imaging radar capable of long-range spot scanning of the object. The hybrid hyperspectral augmented reality device also includes a wearable computing unit capable of co-registering data from the multi-level spectral imaging sensors and determining, based on said co-registered data, a threat level and geospatial location for the object.
    Type: Application
    Filed: April 9, 2018
    Publication date: March 7, 2019
    Applicant: Syso Ou
    Inventor: Diwaker JHA
  • Publication number: 20190073537
    Abstract: A system includes a computing device that includes a memory configured to store instructions. The system also includes a processor to execute the instructions to perform operations that include determining a ranking of images using a machine learning system. The machine learning system is trained using attributes that represent each of a plurality of training images. The attributes include imagery attributes, social network attributes, and textual attributes. Operations also include producing a listing of the ranked images for selecting one or more of the ranked images for a brand entity associated with the selected ranked images.
    Type: Application
    Filed: September 7, 2017
    Publication date: March 7, 2019
    Inventors: Luis Sanz Arilla, Esteban Raul Siravegna, Emanuele Luzio
  • Publication number: 20190073538
    Abstract: A computer-implemented method and a corresponding system for classifying objects from a stream of images are presented. The method comprises: providing input data comprising data indicative of at least one image stream; processing said input data and extracting from said at least one image stream a plurality of foreground objects; classifying said plurality of objects, said classifying comprising associating at least some of said plurality of objects in accordance with at least one object type, thereby generating at least one group of objects of similar object types; and generating a training database comprising a plurality of data pieces/records, each data piece comprising image data of one of said plurality of foreground objects and a corresponding objects type. The training database is typically configured for use in training of a learning machine system.
    Type: Application
    Filed: September 6, 2016
    Publication date: March 7, 2019
    Inventor: Zvi ASHANI
  • Publication number: 20190073539
    Abstract: A video communications method includes segmenting an image frame or an image frame portion into first and second source network packet blocks. The first source network packet block includes a first number of source network packets and the second network packet block includes a second number of source network packets. The method further includes encoding the first source network packet block to produce a first encoded network packet block and encoding the second source network packet block to produce a second encoded network packet block. The first encoded network packet block includes a first number of encoded network packets and the second encoded network packet block includes a second number of encoded network packets. Still further, the method includes transmitting the first and second encoded network packet blocks over a wireless network. A wireless device and a vehicle may utilize the video communications method.
    Type: Application
    Filed: September 7, 2017
    Publication date: March 7, 2019
    Applicant: GM GLOBAL TECHNOLOGY OPERATIONS LLC
    Inventors: Xin Yu, Fan Bai, Wende Zhang, John Sergakis
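The segmentation step of 20190073539 can be sketched as splitting a frame's bytes into fixed-size packets and grouping them into two blocks; the packet size and the block split are assumptions, and the actual encoding (e.g., error-correction coding) is not shown.

```python
# Sketch of splitting an image frame's bytes into two source network packet
# blocks, following the segmentation described in 20190073539. The encoding
# of each block into an encoded packet block is omitted.

def packetize(frame_bytes, packet_size, first_block_packets):
    """Split frame bytes into fixed-size packets, then into two blocks."""
    packets = [frame_bytes[i:i + packet_size]
               for i in range(0, len(frame_bytes), packet_size)]
    return packets[:first_block_packets], packets[first_block_packets:]
```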
  • Publication number: 20190073540
    Abstract: A vehicle control device includes: a recognition unit recognizing a horizontal position of a subject vehicle with respect to a lane in which the subject vehicle is running; and an other-vehicle monitoring control unit executing a predetermined operation in a case in which a state of another vehicle present on a rear side of the subject vehicle satisfies a predetermined condition and changing the predetermined condition on the basis of the horizontal position recognized by the recognition unit.
    Type: Application
    Filed: August 28, 2018
    Publication date: March 7, 2019
    Inventors: Hiroyuki Yamada, Makoto Katayama
  • Publication number: 20190073541
    Abstract: A lane determining system for a vehicle includes a camera disposed at the vehicle so as to have a field of view forward of the vehicle. The camera captures image data. A non-vision based sensor is disposed at the vehicle so as to have a field of sensing forward of the vehicle. The non-vision based sensor captures sensor data. A control includes at least one processor operable to process image data captured by the camera and sensor data captured by the non-vision based sensor. The control, responsive to processing of captured image data, detects visible lane markers painted on the road along which the vehicle is traveling. The control, responsive to processing of captured sensor data, detects road-embedded elements disposed along the road. The control determines at least the lane along which the vehicle is traveling based on the detected lane markers or the detected road-embedded elements.
    Type: Application
    Filed: September 6, 2018
    Publication date: March 7, 2019
    Inventor: Krishna Koravadi
  • Publication number: 20190073542
    Abstract: In one example, a computing system is configured to obtain orientation data describing an orientation of the camera and geolocation data describing a location of the camera. The computing system is further configured to select, from one or more candidate images from an image database and based at least on the orientation data and the geolocation data, a best-matched image for a current image captured by the camera. The computing system is also configured to determine a location of a vehicle lane in the current image based on at least one lane marker visible in the best-matched image and output at least one indication of the location of the vehicle lane.
    Type: Application
    Filed: September 7, 2018
    Publication date: March 7, 2019
    Inventors: Junaed Sattar, Jiawei Mo
  • Publication number: 20190073543
    Abstract: An image information comparison system is provided that can effectively utilize image information captured by an on-board camera for purposes other than benefiting the vehicle on which the on-board camera is mounted. Image information comparison system 1 has comparison data storage 13 that stores comparison data; on-board camera A, B, C that is mounted on a vehicle, the camera capturing an image that is external to the vehicle; data comparator 14 that compares image information captured by on-board camera A, B, C with the comparison data stored in the comparison data storage 13 based on biometric authentication technology, character recognition technology or image recognition technology; and reporting means 17 that reports a result of the comparison made by data comparator 14.
    Type: Application
    Filed: September 5, 2016
    Publication date: March 7, 2019
    Inventor: Mikio ICHINOSE
  • Publication number: 20190073544
    Abstract: A system mountable in a vehicle to provide object detection in the vicinity of the vehicle. The system includes a camera operatively attached to a processor. The camera is mounted externally at the rear of the vehicle. The field of view of the camera is substantially in the forward direction of travel of the vehicle along the side of the vehicle. Multiple image frames are captured from the camera. Yaw of the vehicle may be input or the yaw may be computed from the image frames. Respective portions of the image frames are selected responsive to the yaw of the vehicle. The image frames are processed to detect thereby an object in the selected portions of the image frames.
    Type: Application
    Filed: November 6, 2018
    Publication date: March 7, 2019
    Applicant: MOBILEYE VISION TECHNOLOGIES LTD.
    Inventors: Yaniv ELIMALECH, Gideon STEIN
  • Publication number: 20190073545
    Abstract: A plausibility check module for a vehicle driver assistance system includes at least one sensor for detecting the vehicle surroundings, and KI-module(s) to classify objects in the surroundings based on the sensor data supplied by the sensor with an internal processing chain established by parameters. The plausibility check module receives pieces of reference information about objects in the surroundings supplied by other vehicles and/or by an infrastructure, compares the pieces of reference information with the classification result of the KI-module, and initiates at least one measure for a deviation established by the comparison, so that the parameters of the processing chain of the KI-module are adapted such that the deviation is reduced in comparable situations. Also described are a driver assistance system having the plausibility check module and KI-module(s), a method for calibrating a sensor for detecting the vehicle surroundings, and an associated computer program.
    Type: Application
    Filed: August 23, 2018
    Publication date: March 7, 2019
    Inventors: Maxim Dolgov, Thomas Michalke, Florian Wildschuette, Hendrik Fuchs, Ignacio Llatser Marti
  • Publication number: 20190073546
    Abstract: A driver state recognition apparatus that recognizes a state of a driver of a vehicle provided with an autonomous driving system includes an image acquisition unit that acquires an image captured by a camera that captures the driver, a hand detection unit that detects the hands of the driver from the image acquired by the image acquisition unit, an object holding state detection unit that detects the holding state of an object by the hands of the driver detected by the hand detection unit, and a readiness determination unit that determines whether the driver is in a state of being able to immediately operate a steering wheel of the vehicle during autonomous driving, based on the holding state of the object detected by the object holding state detection unit.
    Type: Application
    Filed: July 10, 2018
    Publication date: March 7, 2019
    Applicant: OMRON Corporation
    Inventors: Hatsumi AOI, Tomoyoshi AIZAWA, Tadashi HYUGA, Kazuyoshi OKAJI, Koji TAKIZAWA, Hiroshi SUGAHARA
  • Publication number: 20190073547
    Abstract: Personal emotional profile generation uses cognitive state analysis for vehicle manipulation. Cognitive state data is obtained from an individual. The cognitive state data is extracted, using one or more processors, from facial images of an individual captured as they respond to stimuli within a vehicle. The cognitive state data extracted from facial images is analyzed to produce cognitive state information. The cognitive state information is categorized, using one or more processors, against a personal emotional profile for the individual. The vehicle is manipulated, based on the cognitive state information, the categorizing, and the stimuli. The personal emotional profile is generated by comparing the cognitive state information of the individual with cognitive state norms from a plurality of individuals and is based on cognitive state data for the individual that is accumulated over time. The cognitive state information is augmented based on audio data collected from within the vehicle.
    Type: Application
    Filed: October 29, 2018
    Publication date: March 7, 2019
    Applicant: Affectiva, Inc.
    Inventors: Rana el Kaliouby, Abdelrahman N. Mahmoud, Seyedmohammad Mavadati, Gabriele Zijderveld
  • Publication number: 20190073548
    Abstract: A magnetic ink reader includes a conveyance mechanism for a sheet, a magnetizing mechanism configured to magnetize magnetic ink on the sheet and including a magnet having a first side of a first magnetic polarity, that is arranged to face a first surface of the sheet, and a yoke that is formed of a soft magnetic material and includes a base portion attached directly to a second side of the magnet, and an extension portion extending from the base portion such that an end surface of the extension portion faces a second surface of the sheet, and a magnetic detection head along the conveyance path and configured to detect magnetism of the magnetized magnetic ink on the sheet. A first distance between the conveyance path and the first side of the magnet is less than a second distance between the conveyance path and the end surface of the yoke.
    Type: Application
    Filed: September 5, 2018
    Publication date: March 7, 2019
    Inventors: Tsuyoshi SANADA, Yuji KAWAMORITA
  • Publication number: 20190073549
    Abstract: A magnetic ink reader includes a conveyance mechanism for a sheet, a magnetizing mechanism configured to magnetize magnetic ink on the sheet and including a magnet having a first side of a first polarity, that is arranged to face the sheet, and a yoke that is formed of a soft magnetic material and includes a base portion attached directly to a second side of the magnet, and a partition wall of a second polarity extending towards the conveyance path, such that a side surface of the partition wall faces a third side of the magnet and an end surface of the partition wall faces the sheet, and a magnetic detection head for detecting magnetism of magnetized magnetic ink. A first distance between the conveyance path and the first side of the magnet and a second distance between the conveyance path and the end surface of the partition wall are different.
    Type: Application
    Filed: September 5, 2018
    Publication date: March 7, 2019
    Inventors: Tsuyoshi SANADA, Yuji KAWAMORITA
  • Publication number: 20190073550
    Abstract: A sensor calibration target configured for sensor calibration relative to a common frame of reference is disclosed. The sensor calibration target comprises a first surface at a first predefined depth bearing a first set of indicia at respective first heights and having respective first predefined shifts, each of the first indicia encoding a corresponding first height. The sensor calibration target further comprises a second surface at a second predefined depth bearing a second set of indicia at respective second heights and having respective second predefined shifts, each of the second indicia encoding a corresponding second height.
    Type: Application
    Filed: March 5, 2018
    Publication date: March 7, 2019
    Inventor: Alexandre Obotnine
  • Publication number: 20190073551
    Abstract: Embodiments of the present disclosure provide a method and apparatus for detecting a license plate.
    Type: Application
    Filed: March 8, 2017
    Publication date: March 7, 2019
    Inventors: Shiliang PU, Yi NIU, Zuozhou PAN, Binghua LUO
  • Publication number: 20190073552
    Abstract: The present disclosure discloses an image processing method and device. The image processing method includes: dividing a detection image into a plurality of first subregions, dividing a template image into a plurality of second subregions, calculating a principal rotation direction of each first subregion with respect to the corresponding second subregion; and calculating a principal rotation direction of the detection image according to the principal rotation directions of the plurality of first subregions.
    Type: Application
    Filed: March 8, 2018
    Publication date: March 7, 2019
    Applicant: BOE TECHNOLOGY GROUP CO., LTD.
    Inventor: Jinglin YANG
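An illustrative sketch (not the claimed implementation) of the per-subregion rotation scheme described in publication 20190073552: each detection subregion's rotation relative to the matching template subregion is estimated from dominant gradient orientations, and the image-level principal rotation is a circular mean of the subregion estimates. All function and parameter names here are assumptions for illustration.

```python
import numpy as np

def subregion_rotation(det_block, tmpl_block, n_bins=36):
    """Estimate the rotation of one detection subregion relative to the
    matching template subregion by comparing dominant gradient angles.
    (Hypothetical stand-in for the per-subregion step in the abstract.)"""
    def dominant_angle(block):
        gy, gx = np.gradient(block.astype(float))
        angles = np.arctan2(gy, gx)                 # orientation in [-pi, pi]
        mags = np.hypot(gx, gy)
        hist, edges = np.histogram(angles, bins=n_bins,
                                   range=(-np.pi, np.pi), weights=mags)
        peak = np.argmax(hist)
        return 0.5 * (edges[peak] + edges[peak + 1])
    return dominant_angle(det_block) - dominant_angle(tmpl_block)

def principal_rotation(detection, template, grid=(2, 2)):
    """Split both images into a grid of subregions, estimate each
    subregion's rotation, and combine them with a circular mean."""
    h, w = detection.shape
    rows, cols = grid
    thetas = []
    for r in range(rows):
        for c in range(cols):
            ys = slice(r * h // rows, (r + 1) * h // rows)
            xs = slice(c * w // cols, (c + 1) * w // cols)
            thetas.append(subregion_rotation(detection[ys, xs],
                                             template[ys, xs]))
    # Circular mean is robust to angle wrap-around at +/- pi.
    return np.arctan2(np.mean(np.sin(thetas)), np.mean(np.cos(thetas)))
```

For identical detection and template images, every subregion rotation is zero and the combined principal rotation is zero as well.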
  • Publication number: 20190073553
    Abstract: Region proposal is described for image regions that include objects of interest. Feature maps from multiple layers of a convolutional neural network model are used. In one example a digital image is received and buffered. Layers of convolution are performed on the image to generate feature maps. The feature maps are reshaped to a single size. The reshaped feature maps are grouped by sequential concatenation to form a combined feature map. Region proposals are generated using the combined feature map by scoring bounding box regions of the image. Objects in the proposed regions are detected and classified using the feature maps.
    Type: Application
    Filed: February 17, 2016
    Publication date: March 7, 2019
    Inventors: Anbang YAO, Tao KONG, Yurong CHEN
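The reshape-and-concatenate step of publication 20190073553 can be sketched as follows: feature maps drawn from different convolutional layers (and therefore at different spatial resolutions) are resampled to one common size and stacked along the channel axis to form the combined feature map. This is a minimal sketch using nearest-neighbour resampling; the resampling method and all names are assumptions, not details taken from the patent.

```python
import numpy as np

def combine_feature_maps(feature_maps, out_hw=(8, 8)):
    """Resize feature maps from different conv layers to one spatial size
    (nearest-neighbour here, for simplicity) and concatenate them along
    the channel axis to form a single combined feature map."""
    oh, ow = out_hw
    resized = []
    for fmap in feature_maps:               # each fmap: (channels, h, w)
        c, h, w = fmap.shape
        ys = np.arange(oh) * h // oh        # nearest-neighbour row indices
        xs = np.arange(ow) * w // ow        # nearest-neighbour col indices
        resized.append(fmap[:, ys][:, :, xs])
    return np.concatenate(resized, axis=0)  # (sum of channels, oh, ow)
```

A deeper 4x4 map is upsampled and a shallower 16x16 map downsampled so both contribute channels at the same spatial grid.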
  • Publication number: 20190073554
    Abstract: A method of detecting an edge of a support surface by an imaging controller includes: obtaining a plurality of depth measurements captured by a depth sensor and corresponding to an area containing the support surface; selecting, by the imaging controller, a candidate set of the depth measurements based on at least one of (i) an expected proximity of the edge of the support surface to the depth sensor, and (ii) an expected orientation of the edge of the support surface relative to the depth sensor; fitting, by the imaging controller, a guide element to the candidate set of depth measurements; and detecting, by the imaging controller, an output set of the depth measurements corresponding to the edge from the candidate set of depth measurements according to a proximity between each candidate depth measurement and the guide element.
    Type: Application
    Filed: September 7, 2017
    Publication date: March 7, 2019
    Inventor: Richard Jeffrey Rzeszutek
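The pipeline of publication 20190073554 — select candidate depth measurements, fit a guide element, keep measurements close to it — might look like the following under strong simplifying assumptions: measurements are (x, depth) pairs, the edge is expected nearest the sensor, and the guide element is a least-squares line. Parameter names and default values are illustrative only.

```python
import numpy as np

def detect_edge_points(points, near_sensor_frac=0.3, inlier_tol=0.05):
    """Sketch of the abstract's three steps: candidate selection by expected
    proximity, guide-element fitting, and output selection by residual."""
    pts = np.asarray(points, dtype=float)
    # 1. Candidate set: the fraction of measurements nearest the sensor.
    order = np.argsort(pts[:, 1])
    candidates = pts[order[: max(2, int(len(pts) * near_sensor_frac))]]
    # 2. Guide element: fit depth = a*x + b to the candidate measurements.
    a, b = np.polyfit(candidates[:, 0], candidates[:, 1], 1)
    # 3. Output set: candidates within tolerance of the fitted line.
    residual = np.abs(candidates[:, 1] - (a * candidates[:, 0] + b))
    return candidates[residual <= inlier_tol]
```

With ten edge measurements at depth 1.0 and twenty background measurements at depth 3.0, the candidate step isolates the near measurements and the fit retains them all.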
  • Publication number: 20190073555
    Abstract: A system and method for extraction of design elements of a fashion product is provided. The system includes a memory having computer-readable instructions stored therein. The system further includes a processor configured to access a catalogue image of a fashion product. In addition, the processor is configured to segment the catalogue image of the fashion product to determine an article of interest of the fashion product. The processor is further configured to generate an outer contour of the article of interest using a contour tracing technique. Moreover, the processor is configured to analyze coordinates of the generated contour based upon convexity defects of the contour to identify one or more design points. Furthermore, the processor is configured to extract one or more design elements of the fashion product using the identified design points.
    Type: Application
    Filed: May 15, 2018
    Publication date: March 7, 2019
    Applicant: Myntra Designs Private Limited
    Inventor: Makkapati Vishnu VARDHAN
  • Publication number: 20190073556
    Abstract: An image processing system and method for determining one or more attributes of a fashion apparel is provided. The system includes a pattern template. The pattern template further includes a plurality of patterns and the fashion apparel is positioned on top of the pattern template. The system further includes an imaging device configured to capture an image of the fashion apparel positioned on top of the pattern template. In addition, the system includes a size and color determination module coupled to the imaging device and configured to receive the image and extract a size and a color of the fashion apparel by using the plurality of patterns in the pattern template.
    Type: Application
    Filed: September 6, 2018
    Publication date: March 7, 2019
    Applicant: Myntra Designs Private Limited
    Inventor: Makkapati Vishnu VARDHAN
  • Publication number: 20190073557
    Abstract: An apparatus includes a memory configured to store training data used for automatically sorting objects. The apparatus acquires a first captured-image that is captured at a first timing before an object-sorting work for sorting objects is performed, and a second captured-image that is captured at a second timing after the object-sorting work has been performed, and extracts, from each of the first captured-image and the second captured-image, a feature amount of an object-image that is an image of an object included in each of the first captured-image and the second captured-image. The apparatus stores, in the memory, as the training data, a first feature amount corresponding to a first object whose object-image is included in both the first captured-image and the second captured-image, or a second feature amount corresponding to a second object whose object-image is included in only one of the first captured-image and the second captured-image.
    Type: Application
    Filed: September 5, 2018
    Publication date: March 7, 2019
    Applicant: FUJITSU LIMITED
    Inventors: Yuji Matsuda, Kentaro TSUJI, EIGO SEGAWA
  • Publication number: 20190073558
    Abstract: An aspect of the present disclosure includes acquiring an image representing a shadow component in an image capturing environment, the shadow component being reflected in a multi-valued image obtained by capturing an image of a subject; specifying an area having a luminance greater than a predetermined luminance value, the area being included in the image representing the shadow component acquired in the acquiring; correcting the image in such a manner that a luminance of an outer peripheral area of the specified area decreases; and generating a binary image by performing binarization processing on a pixel value of a pixel of interest in the multi-valued image based on a pixel value at the same coordinates as those of the pixel of interest in the corrected image.
    Type: Application
    Filed: August 30, 2018
    Publication date: March 7, 2019
    Inventor: Ritsuko Otake
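The final binarization step of publication 20190073558 — thresholding each pixel of the multi-valued image against the shadow-component value at the same coordinates — can be sketched as below. The margin parameter and the 0/255 output convention are assumptions for illustration, not details from the disclosure.

```python
import numpy as np

def binarize_with_shadow_map(image, shadow, margin=10):
    """Threshold each pixel of the multi-valued image against the
    (corrected) shadow-component value at the same coordinates."""
    image = np.asarray(image, dtype=float)
    shadow = np.asarray(shadow, dtype=float)
    # Pixels darker than the local shadow estimate by more than the margin
    # are treated as foreground (0); the rest become background (255).
    return np.where(image < shadow - margin, 0, 255).astype(np.uint8)
```

Because the threshold varies per pixel with the shadow map, dark regions caused by shading are not misclassified as foreground the way a single global threshold would misclassify them.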
  • Publication number: 20190073559
    Abstract: A method of label detection includes: obtaining, by an imaging controller, an image depicting a shelf; increasing an intensity of a foreground subset of image pixels exceeding an upper intensity threshold, and decreasing an intensity of a background subset of pixels below a lower intensity threshold; responsive to the increasing and the decreasing, (i) determining gradients for each of the pixels and (ii) selecting a candidate set of the pixels based on the gradients; overlaying a plurality of shelf candidate lines on the image derived from the candidate set of pixels; identifying a pair of the shelf candidate lines satisfying a predetermined sequence of intensity transitions; and generating and storing a shelf edge bounding box corresponding to the pair of shelf candidate lines.
    Type: Application
    Filed: September 7, 2017
    Publication date: March 7, 2019
    Inventors: Richard Jeffrey Rzeszutek, Vlad Gorodetsky
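The pre-processing step of publication 20190073559 — raising pixels above an upper intensity threshold and lowering pixels below a lower threshold before gradient computation — might be sketched as follows. The threshold values and the push-to-extremes behaviour are assumptions for illustration.

```python
import numpy as np

def stretch_intensity(image, lower=50, upper=200, low_val=0, high_val=255):
    """Increase the intensity of the foreground subset (pixels above the
    upper threshold) and decrease the background subset (pixels below the
    lower threshold), leaving the mid-range untouched."""
    out = np.asarray(image).copy()
    out[out > upper] = high_val   # foreground: push to full intensity
    out[out < lower] = low_val    # background: push to zero
    return out
```

Sharpening the contrast between label text and shelf background in this way makes the subsequent gradient-based candidate selection more selective.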
  • Publication number: 20190073560
    Abstract: Techniques are disclosed for identifying discriminative, fine-grained features of an object in an image. In one example, an input device receives an image. A machine learning system includes a model comprising a first set, a second set, and a third set of filters. The machine learning system applies the first set of filters to the received image to generate an intermediate representation of the received image. The machine learning system applies the second set of filters to the intermediate representation to generate part localization data identifying sub-parts of an object and one or more regions of the image in which the sub-parts are located. The machine learning system applies the third set of filters to the intermediate representation to generate classification data identifying a subordinate category to which the object belongs. The system uses the part localization and classification data to perform fine-grained classification of the object.
    Type: Application
    Filed: August 31, 2018
    Publication date: March 7, 2019
    Inventors: Bogdan Calin Mihai Matei, Xiyang Dai, John Benjamin Southall, Nhon Hoc Trinh, Harpreet Sawhney
  • Publication number: 20190073561
    Abstract: Embodiments of an active or laser polarimeter are disclosed that transmit multiple independent and tunable temporally-multiplexed polarization states and record or image, at video rates if necessary, the polarized intensity or irradiance reflected or transmitted by objects illuminated by those states, and apply the recorded data to material and/or object classification and recognition using classification algorithms that exploit features of polarization signatures dependent on material type, texture, and/or object shape. The polarimeter also generally records and utilizes one or more passive polarization measurements in order to realize a hybrid active-passive polarimeter. The polarimeter channels are configured and tuned to access multi-dimensional signature spaces specified by existing signature models and/or measurements, with polarization-modulator settings derived by a newly-disclosed subspace-projection algorithm that maximizes a target contrast parameter.
    Type: Application
    Filed: December 22, 2016
    Publication date: March 7, 2019
    Inventors: Brian G. Hoover, Pablo A. Reyes, David E. Taliaferro, Virgil N. Kohlhepp, III
  • Publication number: 20190073562
    Abstract: A method for identifying an object within a video sequence, wherein the video sequence comprises a sequence of images, wherein the method comprises, for each of one or more images of the sequence of images: using a first neural network to determine whether or not an object of a predetermined type is depicted within the image; and in response to the first neural network determining that an object of the predetermined type is depicted within the image, using an ensemble of second neural networks to identify the object determined as being depicted within the image.
    Type: Application
    Filed: September 6, 2017
    Publication date: March 7, 2019
    Applicant: IRDETO B.V.
    Inventors: Milosh Stolikj, Dmitri Jarnikov
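The two-stage scheme of publication 20190073562 — a first network that gates each frame, followed by an ensemble of second networks whose outputs are combined — can be sketched as below. Here `detector` and the ensemble members are assumed callables returning a detection probability and class-score vectors respectively; the 0.5 gate threshold and mean-vote combination are illustrative assumptions.

```python
import numpy as np

def identify_object(image, detector, ensemble):
    """Gate the frame with a first network; only if it reports an object of
    the predetermined type, identify it with an ensemble of second networks
    by averaging their class scores."""
    if detector(image) < 0.5:          # no object of the predetermined type
        return None
    scores = np.mean([net(image) for net in ensemble], axis=0)
    return int(np.argmax(scores))      # identity with the highest mean score
```

Running the ensemble only on gated frames keeps the per-frame cost of the video pipeline low, since most frames are rejected by the cheaper first network.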
  • Publication number: 20190073563
    Abstract: A unit is disclosed for generating combined feature maps in accordance with a processing task to be performed, the unit comprising a feature map generating unit for receiving more than one modality and for generating more than one corresponding feature map using more than one corresponding transformation, wherein the generating of each of the more than one corresponding feature map is performed by applying a given corresponding transformation on a given corresponding modality, wherein the more than one corresponding transformation is generated following an initial training performed in accordance with the processing task to be performed; and a combining unit for selecting and combining the corresponding more than one feature map generated by the feature map generating unit in accordance with at least one combining operation and for providing at least one corresponding combined feature map, wherein the combining unit is operating in accordance with the processing task to be performed and the combining operation.
    Type: Application
    Filed: March 17, 2017
    Publication date: March 7, 2019
    Applicant: IMAGIA CYBERNETICS INC.
    Inventors: Nicolas CHAPADOS, Nicolas GUIZARD, Mohammad HAVAEI, Yoshua BENGIO
  • Publication number: 20190073564
    Abstract: The technology disclosed uses a combination of an object detector and an object tracker to process video sequences and produce tracks of real-world images categorized by objects detected in the video sequences. The tracks of real-world images are used to iteratively train and re-train the object detector and improve its detection rate during a so-called “training cycle”. Each training cycle of improving the object detector is followed by a so-called “training data generation cycle” that involves collaboration between the improved object detector and the object tracker. Improved detection by the object detector causes the object tracker to produce longer and smoother tracks tagged with bounding boxes around the target object. Longer and smoother tracks and corresponding bounding boxes from the last training data generation cycle are used as ground truth in the current training cycle until the object detector's performance reaches a convergence point.
    Type: Application
    Filed: October 2, 2018
    Publication date: March 7, 2019
    Applicant: SENTIENT TECHNOLOGIES (BARBADOS) LIMITED
    Inventor: Antoine SALIOU
  • Publication number: 20190073565
    Abstract: The technology disclosed uses a combination of an object detector and an object tracker to process video sequences and produce tracks of real-world images categorized by objects detected in the video sequences. The tracks of real-world images are used to iteratively train and re-train the object detector and improve its detection rate during a so-called “training cycle”. Each training cycle of improving the object detector is followed by a so-called “training data generation cycle” that involves collaboration between the improved object detector and the object tracker. Improved detection by the object detector causes the object tracker to produce longer and smoother tracks tagged with bounding boxes around the target object. Longer and smoother tracks and corresponding bounding boxes from the last training data generation cycle are used as ground truth in the current training cycle until the object detector's performance reaches a convergence point.
    Type: Application
    Filed: September 5, 2018
    Publication date: March 7, 2019
    Applicant: SENTIENT TECHNOLOGIES (BARBADOS) LIMITED
    Inventor: Antoine SALIOU
  • Publication number: 20190073566
    Abstract: Methods and systems for training a learning based defect classifier are provided. One method includes training a learning based defect classifier with a training set of defects that includes identified defects of interest (DOIs) and identified nuisances. The DOIs and nuisances in the training set include DOIs and nuisances identified on at least one training wafer and at least one inspection wafer. The at least one training wafer is known to have an abnormally high defectivity and the at least one inspection wafer is expected to have normal defectivity.
    Type: Application
    Filed: August 22, 2018
    Publication date: March 7, 2019
    Inventor: Bjorn Brauer