Patent Applications Published on September 12, 2019
-
Publication number: 20190278990
Abstract: A heterogeneous convolutional neural network (HCNN) system includes a visual reception system generating an input image. A feature extraction layer (FEL) portion of convolutional neural networks includes multiple convolution, pooling and activation layers stacked together. The FEL includes multiple stacked layers, a first set of layers learning to represent data in a simple form including horizontal and vertical lines and blobs of colors. Following layers capture more complex shapes such as circles, rectangles, and triangles. Subsequent layers pick up complex feature combinations to form a representation including wheels, faces and grids. The FEL portion outputs data to each of: a first sub-network which performs a first task of object detection, classification, and localization for classes of objects in the input image to create a detected object table; and a second sub-network which performs a second task of defining a pixel level segmentation to create a segmentation data set.
Type: Application
Filed: March 5, 2019
Publication date: September 12, 2019
Inventors: Iyad Faisal Ghazi Mansour, Heinz Bodo Seifert
-
Publication number: 20190278991
Abstract: A control apparatus includes a processor that executes a first point cloud generation process including a first imaging process of acquiring a first image according to a first depth measuring method and a first analysis process of generating a first point cloud and a second point cloud generation process including a second imaging process of acquiring a second image according to a second depth measuring method and a second analysis process of generating a second point cloud, and detects the object using the first point cloud or the second point cloud. The first point cloud generation process completes in a shorter time than the second point cloud generation process, and the processor starts the second point cloud generation process after the first imaging process and discontinues the second point cloud generation process if the first point cloud satisfies a predetermined condition of success.
Type: Application
Filed: March 8, 2019
Publication date: September 12, 2019
Inventors: Masaki HAYASHI, Tomonori MANO
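The control flow this abstract describes (a quick depth pass raced against a slower one, with the slow pass abandoned when the quick result is good enough) can be sketched as follows. This is an illustrative sketch, not the patented implementation; the stand-in point-cloud generators and the `min_points` success condition are assumptions.

```python
def generate_point_cloud_fast(image_size):
    # Stand-in for the first (quicker) depth-measuring method.
    return [(float(x), float(x), 0.0) for x in range(image_size)]

def generate_point_cloud_slow(image_size):
    # Stand-in for the second (slower but denser) depth-measuring method.
    return [(x * 0.5, x * 0.5, 0.0) for x in range(image_size * 4)]

def detect_object(image_size, min_points=3):
    """Return a point cloud, preferring the fast method when it succeeds."""
    fast_cloud = generate_point_cloud_fast(image_size)
    if len(fast_cloud) >= min_points:   # predetermined condition of success
        return fast_cloud, "fast"       # the slow process is discontinued
    slow_cloud = generate_point_cloud_slow(image_size)
    return slow_cloud, "slow"
```

In a real system the two processes would run concurrently and the slow one would be cancelled mid-flight; the sequential sketch above only shows the decision logic.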
-
Publication number: 20190278992
Abstract: An augmented reality system is provided. Aspects includes a device comprising a user interface, a camera, and a controller, the controller operable to receive data associated with a repair item. The controller is further operable to capture, by the camera, media associated with the repair item and analyze the data and the media to determine a candidate repair component of the repair item, wherein the candidate repair component is located at a target location.
Type: Application
Filed: March 12, 2018
Publication date: September 12, 2019
Inventors: Syed F. Hossain, Joshua Schaeffer, Gregg Arquero, Steven Burchfield
-
Publication number: 20190278993
Abstract: Systems and methods for presenting an augmented reality view are disclosed. Embodiments include a system with a database for personalizing an augmented reality view of a physical environment using at least one of a location of a physical environment or a location of a user. The system may further include a hardware device in communication with the database, the hardware device including a renderer configured to render the augmented reality view for display and a controller configured to determine a scope of the augmented reality view authenticating the augmented reality view. The hardware device may include a processor configured to receive the augmented reality view of the physical environment, and present, via a display, augmented reality content to the user while the user is present in the physical environment, based on the determined scope of the augmented reality view.
Type: Application
Filed: September 26, 2018
Publication date: September 12, 2019
Applicant: Capital One Services, LLC
Inventors: Jason Richard Hoover, Micah Price, Sunil Subrahmanyam Vasisht, Qiaochu Tang, Geoffrey Dagley, Stephen Michael Wylie
-
Publication number: 20190278994
Abstract: Disclosed herein are systems and methods for a photograph driven vehicle identification system. In some embodiments, a system for image-based vehicle identification includes a database, an image processor, and a vehicle search engine. The database can include vehicle information. The image processor may apply one or more machine learning models on images received by a user device. The user device can include a camera that obtains the images. The user device can provide a display having images of a vehicle and information associated with the vehicle through a user interface (UI) of the user device. The display can include a first portion at a first location of the UI, and a second portion at a second location of the UI. The first portion and the second portion may be provided at a single instance. The vehicle search engine may identify one or more vehicles in the images received.
Type: Application
Filed: October 3, 2018
Publication date: September 12, 2019
Applicant: Capital One Services, LLC
Inventors: Derek Bumpas, Stewart Youngblood, Mithra Kosur Venuraju, Amit Deshpande, Jason Hoover, Daniel Martinez, William Hardin, Satish Chikkaveerappa, Majaliwa Bass, Jacob Guiles, Sona Solbrook, Valerie Colon, Khai Ha, Micah Price, Qiaochu Tang, Stephen Wylie, Geoffrey Dagley, Jeremy Huang, Venkata Satya Parcha
-
Publication number: 20190278995
Abstract: Systems and methods for tracking objects in a field of view are disclosed. In one embodiment a method may include obtaining, from the non-transient electronic storage, object data. The object data may include a position of one or more objects as a function of time in a field of view. The method may include generating, with the one or more physical computer processors and the one or more AR components, a first virtual object to depict at least one or more of the object data of a first object at a first time. The method may include displaying, via the display, the first virtual object.
Type: Application
Filed: March 7, 2019
Publication date: September 12, 2019
Applicant: Disney Enterprises, Inc.
Inventors: Mark R. Mine, Steven M. Chapman, Alexa L. Hale, Joseph M. Popp, Dawn J. Binder, Calis O. Agyemang, Alice Taylor
-
Publication number: 20190278996
Abstract: An information processing device of the present invention includes a first acquirer, a second acquirer, an analyzer, and a creating unit. The first acquirer is configured to acquire a plurality of images in which a working environment is photographed. The second acquirer is configured to acquire a plurality of setting patterns including analysis setting values. The analysis setting values are setting values regarding an analysis of markers photographed in the images. The analyzer is configured to analyze the markers from the respective images acquired by the first acquirer based on the analysis setting values. The analysis setting values are included in the plurality of respective setting patterns acquired by the second acquirer. The creating unit is configured to create total information for each of the setting patterns. The total information is based on an analysis process of the markers corresponding to the setting patterns by the analyzer.
Type: Application
Filed: December 22, 2017
Publication date: September 12, 2019
Inventor: Masakazu KOMIYAMA
-
Publication number: 20190278997
Abstract: A method for offline-service multi-user interaction based on augmented reality (AR) includes scanning, by an AR client terminal of a user, an offline service label at an offline service site. Information of the offline service label is transmitted to a server terminal. Based on the information of the offline service label, the server terminal establishes a service group including the user and a second user that scanned the offline service label. In response to transmitting the information of the offline service label, service data is received from the server terminal. The service data includes information related to the user and information related to the second user. Based on the service data, a service interactive interface is outputted. The service interactive interface displays the information related to the user and the information related to the second user at a position corresponding to the offline service label in an AR scene.
Type: Application
Filed: May 24, 2019
Publication date: September 12, 2019
Applicant: Alibaba Group Holding Limited
Inventors: Huanmi Yin, Xiaodong Zeng, Feng Lin, Jun Wu
-
Publication number: 20190278998
Abstract: An object region identifying apparatus according to an embodiment identifies to which one of a plurality of predetermined object classes each pixel of an image belongs to label the pixel with an object type. The object region identifying apparatus includes following units. A base cost calculating unit calculates base costs of the respective object classes in each of the pixels. A transition cost estimating unit estimates a transition cost accrued when a transition between the object classes occurs between adjacent pixels in the image. A cumulative cost calculating unit calculates cumulative costs of the respective object classes in each of the pixels by accumulating the base cost and the transition cost for the respective object classes along a scanning direction set on the image. A class determining unit determines the object class of each of the pixels based on the corresponding cumulative cost.
Type: Application
Filed: August 30, 2018
Publication date: September 12, 2019
Applicant: Kabushiki Kaisha Toshiba
Inventor: Akihito SEKI
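The base-cost/transition-cost accumulation described here is a classic dynamic-programming pattern (Viterbi-style scanline optimization). A minimal one-scanline sketch, with invented costs and without the apparatus's multi-direction accumulation, might look like this:

```python
def label_scanline(base_costs, transition_cost):
    """Return one class label per pixel along a scan line.

    base_costs: list of per-pixel lists, one base cost per object class.
    transition_cost: penalty added when the class changes between
    adjacent pixels (a single constant here, for simplicity).
    """
    n = len(base_costs)
    k = len(base_costs[0])
    cum = [list(base_costs[0])]   # cumulative cost per class at pixel 0
    back = []                     # back-pointers for recovering labels
    for i in range(1, n):
        row, ptr = [], []
        for c in range(k):
            costs = [cum[-1][p] + (0 if p == c else transition_cost)
                     for p in range(k)]
            p_best = min(range(k), key=costs.__getitem__)
            row.append(costs[p_best] + base_costs[i][c])
            ptr.append(p_best)
        cum.append(row)
        back.append(ptr)
    # Back-track from the cheapest final class to determine each pixel's class.
    c = min(range(k), key=cum[-1].__getitem__)
    labels = [c]
    for ptr in reversed(back):
        c = ptr[c]
        labels.append(c)
    return labels[::-1]
```

With a low transition cost the labelling follows the base costs; raising it smooths the result toward a single class, which is the trade-off the transition cost estimating unit controls.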
-
Publication number: 20190278999
Abstract: The invention provides a distinguishing device for distinguishing, between vehicles passing in front of the device, a heavy goods vehicle (HGV) from a coach including over its entire length windows between a top side member and a bottom side member, the device comprising: a vertical stack of emitters of incident beams towards at least top halves of flanks of at least some of the vehicles; a vertical stack of receivers for detecting the beams that are reflected by the flanks of said vehicles; calculation means for calculating the time that elapses between each emission of a beam and the detection of the corresponding reflected beam in order to establish an image of each vehicle flank; and image processor means for detecting therein the absence or the presence of a top side member.
Type: Application
Filed: March 5, 2019
Publication date: September 12, 2019
Inventors: Samuel ALLIOT, Grégoire CARRION, Eric GUIDON
-
Publication number: 20190279000
Abstract: A system for correlating sensor data in a vehicle includes a first sensor disposed on the vehicle to detect a plurality of first objects. A first object identification controller analyzes the first data stream, identifies the first objects, and determines first characteristics associated therewith. A second sensor disposed on the vehicle detects a plurality of second objects. A second object identification controller analyzes the second data stream, identifies the second objects, and determines second characteristics associated therewith. A model generator includes a plausibility voter to generate an environmental model of the objects existing in space around the vehicle. The model generator may use ASIL decomposition to provide a higher ASIL level than that of any of the sensors or object identification controllers alone. Matchings between uncertain objects are accommodated using matching distance probability functions and a distance-probability voter. A method of operation is also provided.
Type: Application
Filed: March 7, 2018
Publication date: September 12, 2019
Inventors: MARTIN PFEIFLE, Markus Schupfner, Matthias Schulze
-
Publication number: 20190279001
Abstract: According to an embodiment, a captured image check system includes a camera, an image data input unit, a camera view angle judge processor, a road surface state analysis judge processor, and an output unit. The camera photographs a road surface. The image data input unit inputs image data to be used for analyzing a state of the road surface. The camera view angle judge processor performs a judge process of determining whether or not an angle of view of the camera satisfies a first condition for analyzing the state of the road surface on the basis of the image data. The road surface state analysis judge processor performs a judge process of determining whether or not a second condition for analyzing the state of the road surface is satisfied on the basis of image data of the road surface. The output unit outputs results of the judge processes.
Type: Application
Filed: February 4, 2019
Publication date: September 12, 2019
Inventors: Takahiko Yamazaki, Nobuyuki Kumakura, Masaki Shiratsuki, Yoko Yonekawa, Akira Wada
-
Publication number: 20190279002
Abstract: There is provided herein an apparatus and method for roadside asset tracking and maintenance monitoring having a mobile unit with data capture devices for capturing roadside asset imagery, global positioning system (GPS) receivers and data interfaces for communicating with an asset management server. As such, the apparatus may take roadside imagery for automated asset identification which may include utilising an asset type image recognition technique for automating the identification of the roadside assets.
Type: Application
Filed: May 20, 2019
Publication date: September 12, 2019
Inventor: Norman Boyle
-
Publication number: 20190279003
Abstract: A lane line detection method including following steps: acquiring an original image by an image capture device, in which the original image includes a ground area and a sky area; setting a separating line between the sky area and the ground area in the original image; measuring an average intensity of a central area above the separating line, and deciding a weather condition according to the average intensity; setting a threshold according to the weather condition, and executing a binarization process according to the threshold on an area below the separating line to obtain a binary image; and using a line detection method to detect a plurality of approximate lane lines in the binary image.
Type: Application
Filed: August 23, 2018
Publication date: September 12, 2019
Inventors: Jiun-In GUO, Yi-Ting LAI
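The thresholding steps above (weather decision from the sky intensity, then a weather-dependent binarization of the ground area) can be sketched as below; the line-detection step itself (typically a Hough transform) is omitted. The "bright"/"dark" labels and the threshold values are illustrative assumptions, not values from the patent application.

```python
def decide_weather(image, separating_row):
    """Decide a weather condition from the average intensity above the
    separating line (the whole sky area is averaged in this sketch)."""
    values = [v for row in image[:separating_row] for v in row]
    avg = sum(values) / len(values)
    return "bright" if avg >= 128 else "dark"

def binarize_ground(image, separating_row):
    """Binarize the area below the separating line with a threshold chosen
    according to the decided weather condition."""
    weather = decide_weather(image, separating_row)
    threshold = 200 if weather == "bright" else 100  # illustrative values
    return [[1 if v >= threshold else 0 for v in row]
            for row in image[separating_row:]]
```

Adapting the threshold this way keeps lane markings distinguishable in both overcast and sunlit scenes, which is the point of the weather-condition step.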
-
Publication number: 20190279004
Abstract: A vehicle capable of autonomous driving includes a lane detection system. The lane detection system is trained to predict lane lines using training images. The training images are automatically processed by a training module of the lane detection system in order to create ground truth data. The ground truth data is used to train the lane detection system to predict lane lines that are occluded in real-time images of roadways. The lane detection system predicts lane lines of a roadway in a real-time image even though the lane lines may be indiscernible due to objects on the roadway or due to the position of the lane lines being in the horizon.
Type: Application
Filed: March 6, 2019
Publication date: September 12, 2019
Inventor: Youngwook Paul Kwon
-
Publication number: 20190279005
Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for selecting locations in an environment of a vehicle where objects are likely centered and determining properties of those objects. One of the methods includes receiving an input characterizing an environment external to a vehicle. For each of a plurality of locations in the environment, a respective first object score that represents a likelihood that a center of an object is located at the location is determined. Based on the first object scores, one or more locations from the plurality of locations are selected as locations in the environment at which respective objects are likely centered. Object properties of the objects that are likely centered at the selected locations are also determined.
Type: Application
Filed: March 12, 2018
Publication date: September 12, 2019
Inventors: Abhijit Ogale, Alexander Krizhevsky
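The selection step (picking locations whose center scores are high) is commonly implemented as thresholding combined with local-maximum suppression, so that one object does not produce a cluster of selected locations. A minimal sketch under that assumption (the abstract does not specify the selection rule):

```python
def select_center_locations(scores, threshold):
    """Return (row, col) locations whose score exceeds the threshold and is
    a local maximum among its 4-neighbours."""
    picks = []
    rows, cols = len(scores), len(scores[0])
    for r in range(rows):
        for c in range(cols):
            s = scores[r][c]
            if s <= threshold:
                continue
            neighbours = [scores[rr][cc]
                          for rr, cc in ((r - 1, c), (r + 1, c),
                                         (r, c - 1), (r, c + 1))
                          if 0 <= rr < rows and 0 <= cc < cols]
            if all(s >= n for n in neighbours):
                picks.append((r, c))
    return picks
```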
-
Publication number: 20190279006
Abstract: An object recognition device installed on a vehicle includes a first camera detecting a first object, a second camera detecting a second object, a ranging sensor detecting a third object, and a controller. The controller sets first to third determination ranges with respect to detected positions of the first to third objects, respectively, and executes fusion processing by comparing the determination ranges. First and second pixel densities being pixel densities of the first and second objects are determined based on detected distances of the first and second objects, angles of view and numbers of pixels of the first and second cameras, respectively. The controller sets the first determination range larger as the first pixel density is lower, and sets the second determination range larger as the second pixel density is lower.
Type: Application
Filed: February 14, 2019
Publication date: September 12, 2019
Applicant: TOYOTA JIDOSHA KABUSHIKI KAISHA
Inventors: Jun OZAWA, Shinichi NAGATA
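One plausible reading of "pixel density from distance, angle of view, and number of pixels" is horizontal pixels per metre of scene width at the detected distance, with the determination range widened as that density falls. The formula and all constants below are illustrative assumptions; the abstract does not give the actual relations.

```python
import math

def pixel_density(distance_m, fov_deg, n_pixels):
    """Assumed model: pixels per metre of scene width at the given distance."""
    scene_width = 2.0 * distance_m * math.tan(math.radians(fov_deg) / 2.0)
    return n_pixels / scene_width

def determination_range(density, base_range_m=0.5, reference_density=100.0):
    """Widen the matching range as the pixel density drops (coarser data)."""
    return base_range_m * max(1.0, reference_density / density)
```

Under this model a distant object seen by a wide-angle camera gets a low density and hence a generous determination range, making fusion matches more tolerant exactly where position estimates are coarsest.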
-
Publication number: 20190279007
Abstract: The content and installation site of road signs are verified by detecting a road sign in one or more images from at least one camera. Image analysis is used to determine the content of the road sign portrayed in the image that is visually recognisable for a human being. Moreover, the installation site of the road sign is determined, and data that are not interpretable for a human being are obtained that are provided by the road sign and that represent the content and the installation site of the road sign. If the determined data and the data obtained from the road sign are concordant then the content of the road sign is displayed or announced to a driver of the vehicle for information. If there is no concordance then a further automatic plausibility check can be performed or a question or information can be sent to the driver.
Type: Application
Filed: May 18, 2017
Publication date: September 12, 2019
Inventors: Helge ZINNER, Christoph ARNDT
-
Publication number: 20190279008
Abstract: A device comprising a processing unit (3) configured to project at least one camera image (Img1-Img8) from at least one camera (Cam1-Cam8) onto a virtual projection surface (Proj) in order to create a virtual image (ImgV) of the vehicle interior (2).
Type: Application
Filed: February 28, 2019
Publication date: September 12, 2019
Applicant: ZF Friedrichshafen AG
Inventors: Jochen Abhau, Wolfgang Vieweger
-
Publication number: 20190279009
Abstract: Systems and techniques for monitoring driver state are described herein. In an example, a driver state monitoring system is adapted to receive a set of color images of a person, such as images of a driver of a vehicle with varying levels of illumination in the images. The driver state monitoring system may be further adapted to generate a set of synthesized thermal images from the set of color images. The driver state monitoring system may be further adapted to use a trained thermal image face detector to locate a human face in the synthesized thermal images. The driver state monitoring system may be further adapted to use a trained thermal image facial landmark predictor to locate facial landmarks in the synthesized thermal images. The driver state monitoring system may be further adapted to analyze the facial landmarks in the synthesized thermal images to determine facial feature movements.
Type: Application
Filed: May 31, 2018
Publication date: September 12, 2019
Inventors: Akshay Uttama Nambi Srirangam Narashiman, Venkata N. Padmanabhan, Ishit Mehta, Shruthi Bannur, Sanchit Gupta
-
Publication number: 20190279010
Abstract: A method, a system and a terminal for identity authentication, and a computer readable storage medium are provided. The method for identity authentication includes: acquiring a facial image of a person to be authenticated, and determining from the facial image facial feature information of the person to be authenticated; determining a suspected object using a face authentication platform according to the facial feature information of the person to be authenticated; acquiring a human body image of the person to be authenticated, and determining from the human body image a plurality of skeleton key points of the person to be authenticated; converting the skeleton key points into feature data; and recognizing an identity of the person to be authenticated according to the feature data of the person to be authenticated and information of the suspected object.
Type: Application
Filed: October 5, 2018
Publication date: September 12, 2019
Applicant: Baidu Online Network Technology (Beijing) Co., Ltd.
Inventors: Wenbin Xie, Weiqing He, Fanping Liu, Xiangli Chen
-
Publication number: 20190279011
Abstract: The technology described herein anonymizes images using a bifurcated neural network. The bifurcated neural network can comprise two portions with a local portion running on a local computer system and a remote portion running in a data center that is connected to the local computer system by a computer network. Together, the local portion and the remote portion form a complete neural network able to classify images, while the image never leaves the local computer system. In aspects of the technology, the local portion of the neural network receives a local image and creates a transformed object. The transformed object is communicated to the remote portion of the bifurcated neural network in the data center for classification.
Type: Application
Filed: March 12, 2018
Publication date: September 12, 2019
Inventors: KAMLESH DATTARAM KSHIRSAGAR, FRANK T. SEIDE
-
Publication number: 20190279012
Abstract: Disclosed herein is a method of facilitating inspection of industrial infrastructure by one or more industry experts. The method may include receiving, using a communication device, inspection data from at least one monitoring device associated with an industrial infrastructure. Further, the method may include transmitting, using the communication device, the inspection data to a plurality of user devices associated with a plurality of industry experts. Further, the method may include receiving, using the communication device, annotation data corresponding to the inspection data from the plurality of user devices. Further, the method may include storing, using a storage device, the annotation data in association with the inspection data. Further, the method may include analyzing, using a processing device, the annotation data. Further, the method may include generating, using the processing device, an inspection report based on the analyzing.
Type: Application
Filed: March 12, 2019
Publication date: September 12, 2019
Inventor: Dusty Birge
-
Publication number: 20190279013
Abstract: A sensing system for a vehicle includes a sensor disposed at a vehicle and a control that includes a processor for processing sensor data captured by the sensor. A first polarizer is disposed in a light emitting path of at least one light source of the vehicle and a second polarizer is disposed in a light receiving path of the sensor. The second polarizer has an opposite-handed polarization configuration relative to the first polarizer. Some of the polarized light as polarized by the first polarizer impinges precipitation present in the field of sensing of the sensor and returns toward the sensor as refracted-reflected light. The second polarizer attenuates the refracted-reflected light and allows light reflected from objects present in the sensor's field of sensing to pass through to the sensor. The control, responsive to processing of captured sensor data, detects objects in the field of sensing of the sensor.
Type: Application
Filed: March 7, 2019
Publication date: September 12, 2019
Inventors: Traian Miu, Gabriele W. Sabatini
-
Publication number: 20190279014
Abstract: A method and an apparatus for detecting an object keypoint, an electronic device, a computer readable storage medium, and a computer program include: obtaining a respective feature map of at least one local regional proposal box of an image to be detected, the at least one local regional proposal box corresponding to at least one target object; and separately performing target object keypoint detection on a corresponding local regional proposal box of the image to be detected according to the feature map of the at least one local regional proposal box.
Type: Application
Filed: May 27, 2019
Publication date: September 12, 2019
Applicant: BEIJING SENSETIME TECHNOLOGY DEVELOPMENT CO., LTD
Inventors: Zhiwei FANG, Junjie YAN
-
Publication number: 20190279015
Abstract: A method and an electronic device for enhancing efficiency of searching for a region of interest in a virtual environment are provided. The virtual environment includes a visible scene and an invisible scene. A picture-in-picture (PIP) is displayed in the visible scene as a directional guidance or distance hint related to the region of interest in the invisible scene, thereby saving time and enhancing efficiency of searching for the region of interest.
Type: Application
Filed: June 19, 2018
Publication date: September 12, 2019
Inventors: Yung-Ta LIN, Yi-Chi LIAO, Shan-Yuan TENG, Yi-Ju CHUNG, Li-Wei CHAN, Bing-Yu CHEN
-
Publication number: 20190279016
Abstract: An image processing device includes an acquisition unit that acquires a read image generated by reading a receipt or a bill and including a special character having a size different from a basic size of a character, a conversion unit that performs conversion processing on the size of the special character included in the read image into a size close to the basic size, and a character recognition unit that performs character recognition processing on the conversion-processed read image.
Type: Application
Filed: March 11, 2019
Publication date: September 12, 2019
Applicant: SEIKO EPSON CORPORATION
Inventor: Nobuhisa TAKABAYASHI
-
Publication number: 20190279017
Abstract: A system and method that gathers and labels images for use by a machine learning algorithm for image classification is disclosed. The method includes acquiring a preview image including a portion of a scene, generating a user interface including the preview image, performing object detection on the preview image to detect a set of items, adding, to the first preview image, a set of regions of interest based on the object detection, each region of interest highlighting a location of a corresponding item in the set of items, receiving an image corresponding to the preview image, determining an object identifier associated with an item, and labeling the item in the image using the object identifier and a region of interest corresponding to the item.
Type: Application
Filed: March 9, 2018
Publication date: September 12, 2019
Applicant: Ricoh Co., Ltd.
Inventors: Jamey Graham, Sri Kaushik Pavani
-
Publication number: 20190279018
Abstract: Processing a dithered image comprising a grid of pixels including defining an array of pixels corresponding to a sub-region of the image; performing edge detection along the rows and the columns of the array; counting the number of edges detected along the rows of the array to determine the number of horizontal edges in the array; counting the number of edges detected along the columns of the array to determine the number of vertical edges in the array; identifying whether the sub-region is dithered based on the number of horizontal and vertical edges in the array; and selectively processing the corresponding sub-region of the image based on whether the sub-region is identified to be dithered. The identification step may also be based on the lengths of segments of similar pixels in the lines of the array.
Type: Application
Filed: March 4, 2019
Publication date: September 12, 2019
Inventors: Brecht Milis, Michel Dauw, Frédéric Collet
-
Publication number: 20190279019
Abstract: An image masking method is provided. The method includes: extracting an object from an image; obtaining characteristic information about the extracted object by analyzing the extracted object; determining whether the extracted object is a masking target according to an input setting value or the obtained characteristic information; and performing masking such that the obtained characteristic information is reflected on the extracted object, in response to determining that the extracted object is the masking target among a plurality of objects extracted from the input image, wherein the setting value is set by an input designating at least a partial region in the input image, and wherein in the determining whether the extracted object is the masking target, an object positioned in the at least a partial region is determined as the masking target among the extracted objects.
Type: Application
Filed: January 4, 2019
Publication date: September 12, 2019
Applicant: Hanwha Techwin Co., Ltd.
Inventors: Jin Hyuk CHOI, Song Taek JEONG, Jae Cheon SONG
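The region-based target test (an extracted object positioned in the user-designated region becomes a masking target) can be sketched as a simple containment check. Boxes here are (x1, y1, x2, y2) tuples, and using the box centre as the object's "position" is an assumption for illustration.

```python
def is_masking_target(obj_box, region_box):
    """True when the object's centre falls inside the designated region."""
    x1, y1, x2, y2 = obj_box
    cx, cy = (x1 + x2) / 2.0, (y1 + y2) / 2.0
    rx1, ry1, rx2, ry2 = region_box
    return rx1 <= cx <= rx2 and ry1 <= cy <= ry2
```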
-
Publication number: 20190279020
Abstract: A video encoder compresses video for real-time transmission to a video decoder of a remote teleoperator system that provides teleoperator support to the vehicle based on the real-time video. The video encoder recognizes one or more generic objects in captured video that can be removed from the video without affecting the ability of the teleoperator to control the vehicle. The video encoder removes regions of the video corresponding to the generic objects to compress the video, and generates a metadata stream encoding information about the removed objects. The video decoder generates replacement objects for the objects removed from the compressed video. The video decoder inserts the rendered replacement objects into relevant regions of the compressed video to reconstruct the scene.
Type: Application
Filed: March 7, 2019
Publication date: September 12, 2019
Inventors: Shay Magzimof, David Parunakian
-
Publication number: 20190279021
Abstract: A method is disclosed including: receiving raw image data corresponding to a series of raw images; processing the raw image data with an encoder to generate encoded data, where the encoder is characterized by an input/output transformation that substantially mimics the input/output transformation of one or more retinal cells of a vertebrate retina; and applying a first machine vision algorithm to data generated based at least in part on the encoded data.
Type: Application
Filed: May 23, 2019
Publication date: September 12, 2019
Inventors: Sheila NIRENBERG, Illya Bomash
-
Publication number: 20190279022
Abstract: An object recognition method and a device thereof are provided, the method includes: obtaining a plurality of key points of a test image and grayscale feature information of each of the key points, where the grayscale feature information is obtained according to a grayscale variation in the test image; obtaining hue feature information of each of the key points, where according to hue values of a plurality of adjacent pixels of the key point, the adjacent pixels are divided into a plurality of groups, and one of the groups is recorded as the hue feature information; and determining whether the test image is matched with a reference image according to the grayscale feature information and the hue feature information.
Type: Application
Filed: May 14, 2018
Publication date: September 12, 2019
Applicant: Chunghwa Picture Tubes, LTD.
Inventors: Chun-Chieh Chiu, Hsiang-Tan Lin, Pei-Lin Hsieh
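The hue-grouping step can be sketched as bucketing the adjacent pixels' hue values into a few groups on the 0-360 hue circle and recording the dominant group as the key point's hue feature. The four-group split and "dominant group wins" rule are assumptions; the abstract only says one of the groups is recorded.

```python
def hue_feature(adjacent_hues, n_groups=4):
    """Return the index of the hue group containing the most adjacent pixels."""
    counts = [0] * n_groups
    for hue in adjacent_hues:
        counts[int(hue % 360) * n_groups // 360] += 1
    return max(range(n_groups), key=counts.__getitem__)
```

Combining this coarse hue feature with the grayscale feature lets matching reject key points that agree in brightness gradients but differ in color.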
-
Publication number: 20190279023
Abstract: Various embodiments of the systems and methods described herein are directed towards training an artificial neural network to identify color values of a sample by providing image data obtained through multiple image capture devices under a plurality of lighting conditions. The present invention also includes using a pre-trained neural network to identify the color values of a sample having an unknown color value by capturing an image of an unknown color sample and known color reference samples under any illumination or hardware configuration.
Type: Application
Filed: May 23, 2019
Publication date: September 12, 2019
Inventor: Zhiling Xu
-
Publication number: 20190279024Abstract: The disclosure includes a system and method for providing visual analysis focalized on a salient event. A video processing application receives a data stream from a capture device, determines an area of interest over an imaging area of the capture device, detects a salient event from the data stream, determines whether a location of the detected salient event is within the area of interest, and in response to the location of the salient event being within the area of interest, identifies a portion of the data stream, based on the salient event, on which to perform an action.Type: ApplicationFiled: March 9, 2018Publication date: September 12, 2019Applicant: Ricoh Co., Ltd.Inventors: Manuel Martinello, Hector H. Gonzalez-Banos
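The core gating logic, acting on a stream portion only when the salient event falls inside the area of interest, is simple to sketch. This is an illustration under assumed names and a rectangular region, not the patented implementation.

```python
def in_area_of_interest(event_xy, roi):
    """Return True if a detected event location falls inside a
    rectangular area of interest given as (x0, y0, x1, y1)."""
    x, y = event_xy
    x0, y0, x1, y1 = roi
    return x0 <= x <= x1 and y0 <= y <= y1

def handle_frame(event_xy, roi, clip):
    # Only act on the stream portion when the event is inside the ROI.
    if in_area_of_interest(event_xy, roi):
        return clip  # e.g. forward this segment for further analysis
    return None
```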
-
Publication number: 20190279025Abstract: An image processing apparatus includes an image pyramid generating section, a memory and a matching section. The image pyramid generating section generates an image pyramid including a plurality of layer images of mutually different sizes, from an input image. The memory stores a first dictionary for detecting a first object and a second dictionary for detecting a second object obtained by reducing the first object at a first predetermined reduction ratio. The matching section performs matching between each of the first dictionary and the second dictionary and a detection frame image within a detection frame configured to move within the layer image.Type: ApplicationFiled: September 4, 2018Publication date: September 12, 2019Inventors: Miki Yamada, Michio Yamashita, Makoto Oshikiri
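An image pyramid of mutually different sizes can be sketched by repeated downscaling. The crude nearest-neighbor subsampling below is only a stand-in for proper resizing, and the scale factor and stopping size are assumptions, not values from the patent.

```python
import numpy as np

def image_pyramid(img, scale=0.5, min_size=8):
    """Generate successively downscaled copies of `img`, stopping
    once the next layer would fall below `min_size` pixels."""
    layers = [img]
    while min(img.shape[:2]) * scale >= min_size:
        step = int(round(1 / scale))
        img = img[::step, ::step]  # nearest-neighbor subsampling
        layers.append(img)
    return layers
```

A detection frame would then slide over each layer, so that one fixed-size dictionary can match objects of several apparent sizes.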
-
Publication number: 20190279026Abstract: An image generation apparatus includes a processing circuit and a memory storing at least one computational image. The at least one computational image is a light-field image, a compressive sensing image, or a coded image. The processing circuit (a1) identifies a position of an object in the at least one computational image using a classification device, (a2) generates, using the at least one computational image, a display image in which an indication for highlighting the position of the object is superimposed, and (a3) outputs the display image.Type: ApplicationFiled: May 17, 2019Publication date: September 12, 2019Inventors: SATOSHI SATO, MASAHIKO SAITO, TAKEO AZUMA, KUNIO NOBORI, NOBUHIKO WAKAI
-
Publication number: 20190279027Abstract: Methods and systems for extracting impression marks from a substrate (e.g., paper, foil, textile, etc.). In an example embodiment, an image of a substrate can be captured. Then, physical impressions on the substrate can be detected in the image. The physical impressions are scanned and highlighted in a digital image that is indicative of the actual physical impressions. The scanning and highlighting of the physical impressions can involve enhancing the image to digitally and electronically reproduce the physical impressions. This approach can be implemented in the context of a mobile scanning application that scans the physical impression(s), highlights them, and saves the resulting image as an electronic document.Type: ApplicationFiled: March 7, 2018Publication date: September 12, 2019Inventors: Srinivasarao Bindana, Mahesh Ramasamy, Baskaran Sathishkannah, Liya Stanley
-
Publication number: 20190279028Abstract: The present disclosure provides a method and an apparatus for object re-identification, capable of solving the problem in the related art associated with inefficiency and low accuracy of object re-identification based on multiple frames of images.Type: ApplicationFiled: February 12, 2019Publication date: September 12, 2019Inventors: Naiyan WANG, Jianfu ZHANG
-
Publication number: 20190279029Abstract: Disclosed is a method for determining a relational imprint between two images including the following steps: —the implementation of a first image and of a second image, —a phase of calculating vectors of similarity between tiles belonging respectively to the first and second images, the similarity vectors forming a field of imprint vectors, the field of imprint vectors including at least one haphazard region that is disordered in the sense of an entropy criterion, —a phase of recording, as the relational imprint, a representation of the calculated field of imprint vectors. Also disclosed is a method for authenticating a candidate image with respect to an authentic image implementing the method for determining a relational imprint.Type: ApplicationFiled: May 17, 2017Publication date: September 12, 2019Inventors: Yann BOUTANT, Thierry FOURNEL
-
Publication number: 20190279030Abstract: Methods, systems, and apparatus, for determining fine-grained image similarity. In one aspect, a method includes training an image embedding function on image triplets by selecting image triplets of first, second and third images; generating, by the image embedding function, first, second and third representations of the features of the first, second and third images; determining, based on the first representation of features and the second representation of features, a first similarity measure for the first image to the second image; determining, based on the first representation of features and the third representation of features, a second similarity measure for the first image to the third image; determining, based on the first and second similarity measures, a performance measure of the image embedding function for the image triplet; and adjusting the parameter weights of the image embedding function based on the performance measures for the image triplets.Type: ApplicationFiled: May 22, 2019Publication date: September 12, 2019Inventors: Yang Song, Jiang Wang, Charles J. Rosenberg
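The performance measure over an image triplet is commonly realized as a hinge-style triplet loss; the minimal version below illustrates that general idea and is not necessarily the exact measure used by the inventors. The margin value is an assumption.

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=1.0):
    """Hinge-style triplet loss: the anchor embedding should be
    closer to the positive than to the negative by at least
    `margin`, otherwise the difference is penalized."""
    d_pos = np.sum((anchor - positive) ** 2)
    d_neg = np.sum((anchor - negative) ** 2)
    return max(0.0, d_pos - d_neg + margin)
```

Gradients of this quantity with respect to the embedding function's parameters would drive the weight adjustment described in the abstract.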
-
Publication number: 20190279031Abstract: The invention relates to a method of replacing a processing engine in which a first processing engine (25) is replaced with a second processing engine (28) if the first output (26) of the first processing engine (25) and the second output (29) of the second processing engine (28) are determined to be sufficiently similar. The second processing engine (28) is run in a simulation mode. The first processing engine (25) is run in a production mode or in a simulation mode. Both processing engines use the same data set (21) as input.Type: ApplicationFiled: June 20, 2016Publication date: September 12, 2019Applicant: RES SOFTWARE DEVELOPMENT B.V.Inventors: Bob JANSSEN, Reinhard Peter BRONGERS
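The replacement decision, run both engines on the same data set and swap only if the outputs are sufficiently similar, can be sketched as follows. The mismatch-rate criterion and tolerance are assumptions for illustration.

```python
def outputs_similar(out1, out2, tolerance=0.01):
    """Consider outputs similar when the fraction of mismatching
    records does not exceed `tolerance`."""
    mismatches = sum(1 for a, b in zip(out1, out2) if a != b)
    return mismatches / max(len(out1), 1) <= tolerance

def maybe_replace(engine1, engine2, data_set):
    # Run both engines on the same input; engine2 in simulation mode.
    out1 = [engine1(x) for x in data_set]
    out2 = [engine2(x) for x in data_set]
    return engine2 if outputs_similar(out1, out2) else engine1
```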
-
Publication number: 20190279032Abstract: The present disclosure relates to image preprocessing to improve object recognition. In one implementation, a system for preprocessing an image for object recognition may include at least one memory storing instructions and at least one processor configured to execute the instructions to perform operations. The operations may include receiving the image, detecting a plurality of bounding boxes within the image, grouping the plurality of bounding boxes into a plurality of groups such that bounding boxes within a group have shared areas exceeding an area threshold, deriving a first subset of the plurality of bounding boxes by selecting bounding boxes having highest class confidence scores from at least one group, selecting a bounding box from the first subset having a highest score based on area and class confidence score, and outputting the selected bounding box.Type: ApplicationFiled: May 7, 2019Publication date: September 12, 2019Inventors: Qiaochu TANG, Sunil Subrahmanyam VASISHT, Stephen Michael WYLIE, Geoffrey DAGLEY, Micah PRICE, Jason Richard HOOVER
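The grouping-and-selection pipeline resembles a greedy, overlap-based clustering followed by per-group and global winners. The sketch below is one plausible reading of the abstract, not the claimed algorithm; the greedy grouping rule and the area-times-confidence score are assumptions.

```python
def shared_area(a, b):
    """Intersection area of two (x0, y0, x1, y1) boxes."""
    w = min(a[2], b[2]) - max(a[0], b[0])
    h = min(a[3], b[3]) - max(a[1], b[1])
    return max(0, w) * max(0, h)

def select_box(boxes, scores, area_threshold):
    """Greedily group boxes whose shared area with a group's first
    box exceeds the threshold, keep the highest-confidence box per
    group, then return the best finalist by area * confidence."""
    groups = []
    for i, box in enumerate(boxes):
        for g in groups:
            if shared_area(box, boxes[g[0]]) > area_threshold:
                g.append(i)
                break
        else:
            groups.append([i])
    finalists = [max(g, key=lambda i: scores[i]) for g in groups]
    def area(b):
        return (b[2] - b[0]) * (b[3] - b[1])
    return max(finalists, key=lambda i: area(boxes[i]) * scores[i])
```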
-
Publication number: 20190279033Abstract: In one aspect, the present disclosure relates to a method for performing single-pass object detection and image classification. The method comprises receiving image data for an image in a system comprising a convolutional neural network (CNN), the CNN comprising a first convolutional layer, a last convolutional layer, and a fully connected layer; providing the image data to an input of the first convolutional layer; extracting multi-channel data from the output of the last convolutional layer; summing the extracted data to generate a general activation map; and detecting a location of an object within the image by applying the general activation map to the image data.Type: ApplicationFiled: January 22, 2019Publication date: September 12, 2019Applicant: Capital One Services, LLCInventors: Micah Price, Jason Hoover, Geoffrey Dagley, Stephen Wylie, Qiaochu Tang
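The channel-summing step that produces the general activation map is easy to show concretely. This is a minimal sketch assuming an (H, W, C) feature tensor; taking the argmax position as the object location is an illustrative simplification, not the patent's localization procedure.

```python
import numpy as np

def general_activation_map(conv_features):
    """Sum the channel dimension of the last conv layer's output
    (H, W, C) to obtain a single-channel activation map, then take
    the most active position as a crude location estimate."""
    gam = conv_features.sum(axis=-1)
    y, x = np.unravel_index(np.argmax(gam), gam.shape)
    return gam, (y, x)
```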
-
Publication number: 20190279034Abstract: An image forming apparatus includes a communication interface through which the image forming apparatus communicates with a server and circuitry. The circuitry is configured to: collect learning data; and determine whether to generate a learning model by the server based on the collected learning data or to generate a learning model by the circuitry based on the collected learning data.Type: ApplicationFiled: February 11, 2019Publication date: September 12, 2019Applicant: Ricoh Company, Ltd.Inventor: Hajime KUBOTA
-
Publication number: 20190279035Abstract: Methods and systems are provided for end-to-end text recognition in digitized documents of handwritten characters over multiple lines without explicit line segmentation. An image is received. Based on the image, one or more feature maps are determined. Each of the one or more feature maps include one or more feature vectors. Based at least in part on the one or more feature maps, one or more scalar scores are determined. Based on the one or more scalar scores, one or more attention weights are determined. By applying the one or more attention weights to each of the one or more feature vectors, one or more image summary vectors are determined. Based at least in part on the one or more image summary vectors, one or more handwritten characters are determined.Type: ApplicationFiled: May 24, 2019Publication date: September 12, 2019Inventor: Theodore Damien Christian Bluche
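The step from scalar scores to attention weights to an image summary vector is a standard softmax-weighted sum; the sketch below illustrates that mechanism in isolation, with assumed names, rather than the full recognition model.

```python
import numpy as np

def attend(feature_vectors, scores):
    """Turn scalar scores into attention weights via a numerically
    stable softmax, then form an image summary vector as the
    weighted sum of the feature vectors."""
    scores = np.asarray(scores, dtype=float)
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()
    summary = (weights[:, None] * feature_vectors).sum(axis=0)
    return weights, summary
```

In the end-to-end system, one such summary vector would be produced per decoding step, letting the decoder read characters across lines without explicit line segmentation.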
-
Publication number: 20190279036Abstract: A method and a system for end-to-end modeling are provided. The method includes: determining a topological structure of a target-based end-to-end model, where the topological structure includes an input layer, an encoding layer, a code enhancement layer, a filtering layer, a decoding layer and an output layer; the code enhancement layer adds information of a target unit to a feature sequence outputted by the encoding layer, and the filtering layer filters the feature sequence added with the information of the target unit; collecting multiple pieces of training data; and training parameters of the target-based end-to-end model by using the multiple pieces of the training data.Type: ApplicationFiled: January 11, 2017Publication date: September 12, 2019Applicant: IFLYTEK CO., LTD.Inventors: Jia PAN, Shiliang ZHANG, Shifu XIONG, Si WEI, Guoping HU
-
Publication number: 20190279037Abstract: A multi-task relationship learning system 80 for simultaneously estimating a plurality of prediction models includes a learner 81 for optimizing the prediction models so as to minimize a function that includes a sum total of errors indicating consistency with data and a regularization term deriving sparsity relating to differences between the prediction models, to estimate the prediction models.Type: ApplicationFiled: November 8, 2016Publication date: September 12, 2019Applicant: NEC CorporationInventors: Akira TANIMOTO, Yousuke MOTOHASHI, Ryohei FUJIMAKI
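An objective of the described shape, a data-consistency error plus a regularization term that drives differences between prediction models toward sparsity, can be written down directly. The linear models, squared error, and L1 penalty on pairwise weight differences below are illustrative assumptions, not the claimed formulation.

```python
import numpy as np

def multitask_objective(weights, tasks, lam=0.1):
    """Sum of per-task squared errors plus an L1 penalty on pairwise
    differences between task models; the L1 term pushes many
    differences to exactly zero (sparsity between models)."""
    data_term = sum(np.sum((X @ w - y) ** 2)
                    for w, (X, y) in zip(weights, tasks))
    reg = sum(np.abs(weights[i] - weights[j]).sum()
              for i in range(len(weights))
              for j in range(i + 1, len(weights)))
    return data_term + lam * reg
```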
-
Publication number: 20190279038Abstract: Techniques are disclosed for data flow graph node parallel update for machine learning. A first plurality of processing elements is configured to implement a portion of a data flow graph. The nodes include at least one variable node and implement part of a neural network. A second plurality of processing elements is configured to implement a second portion of the data flow graph. These nodes include at least one additional variable node and implement an additional part of the neural network. Training data is issued to the first plurality of processing elements. The training data is used to update variables within the at least one variable node. Additional variables are updated within the at least one additional variable node. The updating includes forwarding training data from the first plurality to the second plurality. The neural network is trained based on the variables that were updated and the additional variables.Type: ApplicationFiled: May 27, 2019Publication date: September 12, 2019Inventor: Christopher John Nicol
-
Publication number: 20190279039Abstract: A non-transitory computer-readable recording medium stores therein a learning program that causes a computer to execute a process including: setting each of scores to each of a plurality of sets of unlabeled data with regard to each of labels used in a plurality of sets of labeled data based on a distance of each of the plurality of sets of unlabeled data with respect to each of the labels; and causing a learning model to learn using a neural network by using the plurality of sets of labeled data respectively corresponding to the labels of the plurality of sets of labeled data, and the plurality of sets of unlabeled data respectively corresponding to the scores of the plurality of sets of unlabeled data with regard to the labels.Type: ApplicationFiled: February 26, 2019Publication date: September 12, 2019Applicant: FUJITSU LIMITEDInventor: YUHEI UMEDA
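Setting per-label scores for unlabeled samples based on distance can be sketched as soft pseudo-labeling. The inverse-distance scoring and normalization below are assumptions chosen for illustration; the patent does not specify this exact rule.

```python
import numpy as np

def label_scores(unlabeled, labeled, labels):
    """Score each unlabeled sample against each label using inverse
    distance to the nearest labeled sample of that label,
    normalized to sum to 1 per sample (soft pseudo-labels)."""
    classes = sorted(set(labels))
    scores = []
    for u in unlabeled:
        d = np.array([min(np.linalg.norm(u - x)
                          for x, l in zip(labeled, labels) if l == c)
                      for c in classes])
        s = 1.0 / (d + 1e-9)  # avoid division by zero
        scores.append(s / s.sum())
    return np.array(scores)
```

The resulting score vectors could then weight the unlabeled samples alongside the labeled ones during neural network training.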