Patent Applications Published on April 30, 2020
-
Publication number: 20200134318
Abstract: A method for identifying a photovoltaic panel includes: acquiring a grayscale image of an infrared image captured by a camera mounted on a UAV, the grayscale image including an image of a photovoltaic panel; performing edge extraction processing on an image in the grayscale image to obtain a monochrome image including a plurality of horizontal lines and a plurality of vertical lines, the horizontal lines being lines in a first direction, an average length of the lines in the first direction being greater than a preset length, the vertical lines being lines in a second direction, and an average length of the lines in the second direction being less than the preset length; and identifying the photovoltaic panel in the monochrome image based on a relative positional relationship between the horizontal lines and the vertical lines in the monochrome image.
Type: Application
Filed: December 23, 2019
Publication date: April 30, 2020
Inventors: Zefei LI, Chao WENG
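The line-classification criterion this abstract describes (split edge-extracted segments by direction, then compare each group's average length against a preset length) can be sketched roughly as follows. This is an illustrative sketch, not the patented method: the segment format, the slope-based direction test, and the threshold value are all assumptions.

```python
import math

def segment_length(seg):
    (x1, y1), (x2, y2) = seg
    return math.hypot(x2 - x1, y2 - y1)

def classify_lines(segments, preset_length):
    """Split segments into a first (near-horizontal) and second (near-vertical)
    direction group, then check the average-length criterion from the abstract."""
    first, second = [], []
    for seg in segments:
        (x1, y1), (x2, y2) = seg
        if abs(x2 - x1) >= abs(y2 - y1):   # closer to the first (horizontal) direction
            first.append(seg)
        else:
            second.append(seg)
    avg = lambda group: sum(map(segment_length, group)) / len(group) if group else 0.0
    return {
        "horizontal": first,
        "vertical": second,
        "horizontal_avg_ok": avg(first) > preset_length,   # greater than preset length
        "vertical_avg_ok": avg(second) < preset_length,    # less than preset length
    }

segs = [((0, 0), (100, 2)), ((0, 10), (90, 12)),   # long, near-horizontal segments
        ((5, 0), (6, 20)), ((50, 0), (51, 18))]    # short, near-vertical segments
result = classify_lines(segs, preset_length=40.0)
```

The identification step proper would then use the relative positions of the two groups (e.g. a grid of long horizontal lines crossed by short vertical separators).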
-
Publication number: 20200134319
Abstract: An electronic device is provided for image-based detection of offside in gameplay. The electronic device estimates positions of each player-object of a first team and a second team in a current image and further estimates displacement and velocity values of a soccer-object. The electronic device detects a pass state of the soccer-object based on the displacement and the velocity values. The electronic device determines a set of passive offside positions of a set of player-objects of the first team based on the estimated positions of each player-object of the first team. The electronic device further detects an active offside state of at least one player-object in the set of player-objects, based on a first distance between the soccer-object and each of the set of player-objects, and transmits a notification to a referee of the gameplay, in real time or near real time, based on the detected active offside state.
Type: Application
Filed: October 30, 2018
Publication date: April 30, 2020
Inventors: SAKET RANJAN, SHREAY KUMAR
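The decision logic outlined above (detect a pass state from ball displacement/velocity, find passive offside positions, then promote to active offside by distance to the ball) can be sketched in a few lines. This is a simplified illustration, not the patented system: the coordinates, the pass-detection rule, and the distance threshold are all assumptions.

```python
import math

def pass_detected(ball_displacement, ball_velocity, v_min=5.0):
    # Treat a sudden high ball speed with nonzero displacement as the pass state.
    return ball_velocity >= v_min and ball_displacement > 0

def passive_offside(attackers_x, defenders_x, ball_x):
    # Offside line: the second-last defender (play moves toward larger x).
    line = sorted(defenders_x)[-2]
    return [i for i, x in enumerate(attackers_x) if x > line and x > ball_x]

def active_offside(passive_ids, attackers_pos, ball_pos, d_max=3.0):
    # A passively offside player becomes active when close enough to the ball.
    return [i for i in passive_ids
            if math.dist(attackers_pos[i], ball_pos) <= d_max]

attackers = [(30.0, 10.0), (42.0, 12.0)]   # x = distance up the pitch
defenders_x = [45.0, 40.0, 38.0]
ball = (35.0, 11.0)
if pass_detected(ball_displacement=2.0, ball_velocity=8.0):
    passive = passive_offside([p[0] for p in attackers], defenders_x, ball[0])
    active = active_offside(passive, attackers, ball, d_max=10.0)
```

A real system would estimate all of these positions per frame with a detector and tracker before applying rules of this shape.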
-
Publication number: 20200134320
Abstract: Current interfaces for displaying information about items appearing in videos are obtrusive and counterintuitive. They also rely on annotations, or metadata tags, added by hand to the frames in the video, limiting their ability to display information about items in the videos. In contrast, examples of the systems disclosed here use neural networks to identify items appearing on- and off-screen in response to intuitive user voice queries, touchscreen taps, and/or cursor movements. These systems display information about the on- and off-screen items dynamically and unobtrusively to avoid disrupting the viewing experience.
Type: Application
Filed: May 10, 2019
Publication date: April 30, 2020
Inventors: Vincent Alexander Crossley, Jared Max Browarnik, Tyler Harrison Cooper, Carl Ducey Jamilkowski
-
Publication number: 20200134321
Abstract: A pedestrian re-identification method includes: obtaining a target video containing a target pedestrian and at least one candidate video; encoding each target video segment in the target video and each candidate video segment in the at least one candidate video separately; determining a score of similarity between each target video segment and each candidate video segment according to encoding results, the score of similarity being used for representing a degree of similarity between pedestrian features in the target video segment and the candidate video segment; and performing pedestrian re-identification on the at least one candidate video according to the score of similarity.
Type: Application
Filed: December 25, 2019
Publication date: April 30, 2020
Applicant: BEIJING SENSETIME TECHNOLOGY DEVELOPMENT CO., LTD.
Inventors: Dapeng CHEN, Hongsheng LI, Tong XIAO, Shuai YI, Xiaogang WANG
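The segment-level similarity scoring described above can be sketched as follows. In a real re-identification system the encodings come from a learned network; here each "encoding" is a plain feature vector and cosine similarity stands in for the learned score, both assumptions for illustration.

```python
import math

def cosine(a, b):
    num = sum(x * y for x, y in zip(a, b))
    den = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return num / den

def rank_candidates(target_segments, candidates):
    """candidates: {name: [segment encodings]}. Score each candidate video by
    the mean of pairwise segment similarities; return videos best-first."""
    scores = {}
    for name, segs in candidates.items():
        pairs = [cosine(t, c) for t in target_segments for c in segs]
        scores[name] = sum(pairs) / len(pairs)
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

target = [[1.0, 0.0, 0.2], [0.9, 0.1, 0.3]]     # encodings of target video segments
candidates = {
    "cand_a": [[1.0, 0.05, 0.25]],              # similar pedestrian features
    "cand_b": [[0.0, 1.0, 0.0]],                # dissimilar features
}
ranking = rank_candidates(target, candidates)
```

Re-identification then amounts to accepting the top-ranked candidate video(s) whose score clears a threshold.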
-
Publication number: 20200134322
Abstract: Provided is a robot system including a robot, a distance image sensor that temporally continuously acquires, from above an operating space of the robot, distance image information around the operating space, and an image processing device that processes the acquired distance image information, the image processing device defining, around the operating space, a monitoring area that includes a boundary for enabling entrance into the operating space from the outside, including a storing unit that stores reference distance image information, and detecting, based on the distance image information acquired by the distance image sensor and the reference distance image information stored in the storing unit, whether a stationary object present in the monitoring area is blocking the boundary in a visual field of the distance image sensor.
Type: Application
Filed: September 10, 2019
Publication date: April 30, 2020
Applicant: FANUC CORPORATION
Inventors: Yuuki TAKAHASHI, Atsushi WATANABE, Minoru NAKAMURA
-
Publication number: 20200134323
Abstract: An information processing apparatus detects an object queue from a video frame and generates tracking information indicating a position of each tracking target object, where each object included in the object queue is the tracking target object. The information processing apparatus generates queue behavior information related to a behavior of the object queue at a first time point using the tracking information at the first time point. The information processing apparatus computes an estimated position of each tracking target object at a second time point later than the first time point based on the tracking information and the queue behavior information at the first time point. The information processing apparatus updates the tracking information based on the position of each object detected from the video frame at the second time point and the estimated position of each tracking target object at the second time point.
Type: Application
Filed: May 18, 2018
Publication date: April 30, 2020
Applicant: NEC Corporation
Inventor: Ryoma OAMI
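The predict-then-update cycle described above can be sketched with a deliberately simple model: the "queue behavior information" is reduced to a single shared queue velocity, positions are 1-D, and matching is greedy nearest-detection. All of these are simplifying assumptions for illustration, not the patented method.

```python
def predict(tracks, queue_velocity, dt):
    """tracks: {track_id: position at t1}. Estimate each position at t2
    from the queue's common advance speed (the queue behavior)."""
    return {tid: pos + queue_velocity * dt for tid, pos in tracks.items()}

def update(predicted, detections):
    """Greedily assign each predicted track to the closest new detection,
    updating the tracking information with the detected positions."""
    remaining = list(detections)
    updated = {}
    for tid, est in sorted(predicted.items()):
        best = min(remaining, key=lambda d: abs(d - est))
        remaining.remove(best)
        updated[tid] = best
    return updated

tracks_t1 = {0: 0.0, 1: 2.0, 2: 4.0}       # queue positions at time t1
predicted_t2 = predict(tracks_t1, queue_velocity=1.0, dt=1.0)
detections_t2 = [1.1, 2.9, 5.2]            # detector output at time t2
tracks_t2 = update(predicted_t2, detections_t2)
```

Using queue-level behavior for prediction keeps tracks consistent even when individual detections jitter or briefly drop out.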
-
Publication number: 20200134324
Abstract: Examples of techniques for using fixed-point quantization in deep neural networks are disclosed. In one example implementation according to aspects of the present disclosure, a computer-implemented method includes capturing a plurality of images at a camera associated with a vehicle and storing image data associated with the plurality of images to a memory. The method further includes dispatching vehicle perception tasks to a plurality of processing elements of an accelerator in communication with the memory. The method further includes performing, by at least one of the plurality of processing elements, the vehicle perception tasks using a neural network, wherein performing the vehicle perception tasks comprises quantizing a fixed-point value based on an activation input and a synapse weight. The method further includes controlling the vehicle based at least in part on a result of performing the vehicle perception tasks.
Type: Application
Filed: October 25, 2018
Publication date: April 30, 2020
Inventors: Shuqing Zeng, Wei Tong, Shige Wang, Roman L. Millett
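The core fixed-point operation the abstract mentions (quantizing a value derived from an activation input and a synapse weight) can be sketched with a power-of-two scale (Q-format). The Q4.11 format below is an assumption chosen for illustration, not the format used in the patent.

```python
# Fixed-point quantization sketch: floats are mapped to integers with a
# power-of-two scale, multiplied in the integer domain, and rescaled.
FRAC_BITS = 11          # fractional bits (Q4.11, an assumed format)
SCALE = 1 << FRAC_BITS

def to_fixed(x):
    return int(round(x * SCALE))

def from_fixed(q):
    return q / SCALE

def fixed_mac(activation, weight):
    """Multiply an activation input by a synapse weight entirely on
    quantized fixed-point values, then rescale back to Q4.11."""
    prod = to_fixed(activation) * to_fixed(weight)   # Q4.11 * Q4.11 -> Q8.22
    return from_fixed(prod >> FRAC_BITS)             # drop extra fractional bits

approx = fixed_mac(0.5, 0.25)
```

On an accelerator, keeping the multiply-accumulate chain in the integer domain like this is what makes fixed-point inference cheaper than floating point.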
-
Publication number: 20200134325
Abstract: The present disclosure provides a method, system, device, and storage medium for determining whether there is a target road facility at an intersection. The method may include: obtaining trajectory data related to left-turn trajectories of moving objects at a target intersection; extracting information related to feature parameters associated with the target road facility from the trajectory data; and determining whether there is the target road facility at the target intersection based on the information related to the feature parameters of the target intersection. The method may determine the road facility configuration of an intersection intelligently, reducing labor and time costs.
Type: Application
Filed: December 16, 2018
Publication date: April 30, 2020
Applicant: BEIJING DIDI INFINITY TECHNOLOGY AND DEVELOPMENT CO., LTD.
Inventors: Weili SUN, Zhihao ZHANG, Zelong DU
-
Publication number: 20200134326
Abstract: A driver assistance system detects lane markings in a perspective image of a road in front of a vehicle. The driver assistance system extracts a plurality of features, in particular lane markings, from the perspective image for generating a set of feature coordinates, in particular lane marking coordinates. The system generates a plurality of pairs of feature coordinates, each pair defining a straight line, and estimates a lane curvature on the basis of a subset of the pairs of feature coordinates. For each pair in the subset, the straight line defined by the pair intersects a predefined target portion of the perspective image, the predefined target portion including a plurality of possible positions of a vanishing point.
Type: Application
Filed: December 27, 2019
Publication date: April 30, 2020
Inventors: Atanas BOEV, Onay URFALIOGLU, Panji SETIAWAN
-
Publication number: 20200134327
Abstract: An autonomous driving system for a vehicle comprising: an I/O module operative to communicate with an obstacle avoidance server; at least one sensor operative to provide at least an indication of an obstacle in a path of said vehicle; processing circuitry; and an autonomous driving manager to be executed by said processing circuitry and operative to: detect said at least an indication of an obstacle based on data provided by said at least one sensor, drive said vehicle in accordance with a driving policy associated with said obstacle, and send an obstacle report with obstacle information associated with said at least an indication of an obstacle to said obstacle avoidance server.
Type: Application
Filed: August 20, 2019
Publication date: April 30, 2020
Inventors: Igal RAICHELGAUZ, Karina Odinaev
-
Publication number: 20200134328
Abstract: A method for driving a first vehicle based on information received from a second vehicle, the method may include: receiving, by the first vehicle, acquired image information regarding (a) a signature of an acquired image that was acquired by the second vehicle, and (b) a location of acquisition of the acquired image; extracting, from the acquired image information, information about objects within the acquired image; and performing a driving related operation of the first vehicle based on the information about objects within the acquired image.
Type: Application
Filed: August 20, 2019
Publication date: April 30, 2020
Inventors: Igal Raichelgauz, Karina Odinaev
-
Publication number: 20200134329
Abstract: A method for tracking an entity, the method may include: tracking, by a monitor of a vehicle, a movement of an entity that appears in various images acquired during a tracking period; generating, by a processing circuitry of the vehicle, an entity movement function that represents the movement of the entity during the tracking period; generating, by the processing circuitry of the vehicle, a compressed representation of the entity movement function; and responding to the compressed representation of the entity movement function.
Type: Application
Filed: August 20, 2019
Publication date: April 30, 2020
Inventors: Igal Raichelgauz, Karina Odinaev
-
Publication number: 20200134330
Abstract: An object detection system and method are disclosed. The object detection system comprises at least one image capturing device operably coupled to a work vehicle, wherein the at least one image capturing device is configured to capture images of one or more worksite objects associated with a workman. An electronic data processor is communicatively coupled to the at least one imaging device, the electronic data processor comprising an object recognition device configured to process images received by the image capturing device. A computer readable storage medium comprising machine readable instructions that, when executed by the electronic data processor, cause the object recognition device to: associate a plurality of identifying indicia with the worksite objects; determine an object type of the worksite objects based on the plurality of identifying indicia; and characterize a workman located within a vicinity of the work vehicle based on the determined object type.
Type: Application
Filed: September 4, 2019
Publication date: April 30, 2020
Inventor: Mark J. Cherney
-
Publication number: 20200134331
Abstract: Techniques including receiving a distorted image from a camera disposed about a vehicle, detecting, in the distorted image, corner points associated with a target object, mapping the corner points to a distortion corrected domain based on one or more camera parameters, interpolating one or more intermediate points to generate lines between the corner points in the distortion corrected domain, mapping the corner points and the lines between the corner points back to a distorted domain based on the camera parameters to locate the target object, and adjusting a direction of travel of the vehicle based on the located target object.
Type: Application
Filed: September 30, 2019
Publication date: April 30, 2020
Inventors: Deepak PODDAR, Soyeb NAGORI, Manu MATHEW, Debapriya MAJI
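The pipeline structure in this abstract (map corner points to a distortion-corrected domain, interpolate intermediate points to form straight lines there, then map everything back to the distorted domain) can be sketched as below. The "distortion" here is a deliberately toy uniform scale so the steps are easy to follow; a real camera would use its calibrated intrinsic and lens-distortion parameters instead.

```python
K = 0.5   # toy distortion scale (assumption); stands in for real camera parameters

def undistort(pt):
    # Map a point from the distorted domain to the corrected domain.
    return (pt[0] / K, pt[1] / K)

def distort(pt):
    # Map a point from the corrected domain back to the distorted domain.
    return (pt[0] * K, pt[1] * K)

def interpolate(p, q, n):
    """n intermediate points on the straight segment p-q (corrected domain)."""
    return [(p[0] + (q[0] - p[0]) * t, p[1] + (q[1] - p[1]) * t)
            for t in (i / (n + 1) for i in range(1, n + 1))]

corners_distorted = [(8.0, 8.0), (16.0, 8.0)]          # detected corner points
corrected = [undistort(c) for c in corners_distorted]  # straight-line geometry holds here
line_pts = interpolate(corrected[0], corrected[1], n=3)
back = [distort(p) for p in corrected + line_pts]      # curved object outline in the image
```

The point of interpolating in the corrected domain is that a straight edge of the target object is actually straight there; mapping the sampled points back yields the curved outline as it appears in the distorted image.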
-
Publication number: 20200134332
Abstract: A parking object or vehicle detection system (60) uses BLE proximity sensing to identify a vehicle (520) with a beacon ID to within about one vehicle length from an access gate (516) of a vehicle parking surface lot or garage. The system performs vision-based parking inventory management (514) by monitoring live vehicle ingress and egress traffic and communicating to a backend server (70) changes in instances of vehicle parking events.
Type: Application
Filed: June 4, 2018
Publication date: April 30, 2020
Inventors: Sohrab Vossoughi, Dave Cole, Massoud Mollaghaffari
-
Publication number: 20200134333
Abstract: The present disclosure is directed to a traffic light recognition system and method for advanced driver assistance systems (ADAS) that is robust to variations in illumination, partial occlusion, climate, and the shape and angle at which the traffic light is viewed. The solution performs real-time recognition of a traffic light by detecting the region of interest, where extracting the region of interest is achieved by projecting the sequence of frames into a kernel space, binarizing the linearly separated sequence of frames, and identifying and classifying the region of interest as a candidate representative of a traffic light. With this combination of techniques, a traffic light can be recognized in real time with enhanced accuracy from among closely similar-appearing objects such as vehicle headlights, tail or rear lights, lamp posts, reflections, street lights, etc.
Type: Application
Filed: September 3, 2019
Publication date: April 30, 2020
Inventors: Kumar Vishal, Arvind Channarayapatna Srinivasa, Ritesh Mishra, Venugopal Gundimeda
-
Publication number: 20200134334
Abstract: Provided are a method for setting an owner of an object based on sensing information regarding an internal environment of a vehicle when the object becomes separated from a passenger, and an electronic apparatus therefor. In the present disclosure, one or more of the electronic apparatus, the vehicle, a vehicular terminal, and the autonomous driving vehicle may be associated with an artificial intelligence module, an unmanned aerial vehicle (UAV), a robot, an augmented reality (AR) device, a virtual reality (VR) device, a 5G service-related device, and the like.
Type: Application
Filed: December 30, 2019
Publication date: April 30, 2020
Applicant: LG ELECTRONICS INC.
Inventors: Hyunkyu KIM, Kibong SONG
-
Publication number: 20200134335
Abstract: In a control apparatus for an automated driving vehicle, a receiving unit receives an external start signal that is a signal for starting automated driving in the automated driving vehicle and is transmitted from outside the automated driving vehicle. A control unit performs processes required for automated driving. A person determining unit determines whether a person is present inside a vehicle cabin of the automated driving vehicle. An operating unit is provided inside the vehicle cabin and receives a single or a plurality of operations for starting automated driving in the automated driving vehicle. In response to the person determining unit determining that a person is present inside the vehicle cabin, the control unit does not start automated driving even when the receiving unit receives the external start signal, and starts automated driving only when an operation on the operating unit is performed.
Type: Application
Filed: December 30, 2019
Publication date: April 30, 2020
Applicant: DENSO CORPORATION
Inventors: Mitsuharu HIGASHITANI, Noriaki IKEMOTO, Tomomi HASE
-
Publication number: 20200134336
Abstract: An apparatus for determining a visual confirmation target, the apparatus includes a gaze detection portion detecting a gaze direction of a driver for a vehicle, a vehicle information acquisition portion, an image acquisition portion acquiring a captured image, a gaze region extraction portion extracting a gaze region of the driver within the captured image based on a detection result of the gaze direction, a candidate detection portion recognizing objects included in the captured image, generating a top-down saliency map based on the captured image and the vehicle information, and detecting an object having saliency in the top-down saliency map among the recognized objects as a candidate for a visual confirmation target, and a visual confirmation target determination portion determining a visual confirmation target on the basis of an extraction result of the gaze region and a detection result of the candidate for the visual confirmation target.
Type: Application
Filed: October 25, 2019
Publication date: April 30, 2020
Applicant: AISIN SEIKI KABUSHIKI KAISHA
Inventors: Shunsuke KOGURE, Shin OSUGA, Shinichiro MURAKAMI
-
Publication number: 20200134337
Abstract: An emotion estimation apparatus includes a recording section that records one or more events that cause a change in an emotion of a person and prediction information for predicting, for each event, an occurrence of the event; an event predicting section that predicts the occurrence of the event, based on detection of the prediction information; and a frequency setting section that sets a frequency with which an estimation of the emotion is performed. If the occurrence of the event is predicted by the event predicting section, the frequency setting section sets the frequency to be higher than in a case where the occurrence of the event is not predicted, and also sets the frequency based on the content of the event.
Type: Application
Filed: October 28, 2019
Publication date: April 30, 2020
Inventors: Kenji OKUMA, Takashi OKADA, Kota SAITO, Seungho CHOI, Yoshikazu MATSUO, Naoki KIKUCHI, Katsuya IKENOBU
-
Publication number: 20200134338
Abstract: Embodiments of the present disclosure provide a method and an apparatus for controlling an unmanned vehicle, an electronic device, and a computer readable storage medium. In this method, the computer device of the unmanned vehicle determines the occurrence of an event associated with physical discomfort of a passenger in the vehicle. The computer device also determines a severity degree of the physical discomfort of the passenger. The computer device further controls a driving action of the vehicle based on the determined severity degree.
Type: Application
Filed: October 30, 2019
Publication date: April 30, 2020
Inventor: Ya WANG
-
Publication number: 20200134339
Abstract: Cameras capture time-stamped images of predefined areas. At least one image includes a representation of a portion of an individual with other portions of the individual occluded within the image. Pixel attributes for the portion of the individual are identified and provided as a box or set of coordinates for tracking the individual within the image and in subsequent images taken.
Type: Application
Filed: October 30, 2018
Publication date: April 30, 2020
Inventors: Brent Vance Zucker, Adam Justin Lieberman
-
Publication number: 20200134340
Abstract: Disclosed is a startup authentication method for an intelligent terminal, including first performing face authentication and then performing gesture-based virtual password authentication after the face authentication. Even if the face authentication is cracked, the gesture-based password authentication is still required to log in, so the disclosure can effectively improve the security of authentication. Further, in the disclosure, the gesture-based virtual password authentication is performed based on a gesture image input by a user in the air, so that, since there is no need to perform input operations on a screen of the intelligent terminal, the aesthetics of the intelligent terminal are not affected. Moreover, in the disclosure, when the virtual password is determined by detecting binary images of fingertips, disturbances in the binary images of the fingertips are also removed, which can improve the accuracy and efficiency of subsequent detection of the virtual password.
Type: Application
Filed: December 31, 2019
Publication date: April 30, 2020
Inventor: Guohui Hu
-
Publication number: 20200134341
Abstract: Disclosed is an intelligent terminal whose startup authentication includes first performing face authentication and then performing gesture-based virtual password authentication after the face authentication. Even if the face authentication is cracked, the gesture-based password authentication is still required to log in, so the intelligent terminal of the disclosure can effectively improve the security of authentication. Further, the gesture-based virtual password authentication is performed based on a gesture image input by a user in the air, so that, since there is no need to perform input operations on a screen of the intelligent terminal, the aesthetics of the intelligent terminal are not affected.
Type: Application
Filed: December 31, 2019
Publication date: April 30, 2020
Inventor: GUOHUI HU
-
Publication number: 20200134342
Abstract: The technology described in this document can be embodied in a method that includes receiving, from a sensor, information indicative of an environmental condition. The method also includes receiving first information indicative of whether or not a first image captured by a first image acquisition device corresponds to an alternative representation of a live person, and receiving second information indicative of whether or not a second image captured by a second image acquisition device corresponds to the alternative representation. The first information and the second information are combined in a weighted combination, the corresponding weights being assigned in accordance with the environmental condition. A determination is made, based on the weighted combination, that a subject in the first and second images is an alternative representation of a live person, and in response, access to the secure system is prevented.
Type: Application
Filed: October 26, 2018
Publication date: April 30, 2020
Applicant: Alibaba Group Holding Limited
Inventors: Srikanth Parupati, Yash Joshi, Reza R. Derakhshani
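The weighted combination this abstract describes can be sketched in a few lines: two per-camera spoof scores are fused with weights selected by the sensed environmental condition. The score convention (1.0 = likely a spoof, i.e. an "alternative representation" of a live person), the weight table, and the threshold are all assumptions for illustration.

```python
WEIGHTS = {
    # environmental condition -> (weight for camera 1, weight for camera 2)
    "bright": (0.7, 0.3),   # e.g. trust the visible-light camera in daylight
    "dark":   (0.2, 0.8),   # e.g. trust the second (IR) camera at night
}

def is_spoof(score1, score2, condition, threshold=0.5):
    """Fuse the two devices' spoof scores with condition-dependent weights.
    Returns True when access to the secure system should be prevented."""
    w1, w2 = WEIGHTS[condition]
    combined = w1 * score1 + w2 * score2
    return combined >= threshold

decision_day = is_spoof(0.9, 0.2, "bright")   # camera 1 dominates: flagged
decision_night = is_spoof(0.9, 0.2, "dark")   # camera 2 dominates: passes
```

The same pair of scores can thus yield opposite decisions under different conditions, which is the point of condition-dependent weighting.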
-
Publication number: 20200134343
Abstract: A collation device is configured to include a processor, and a storage unit that stores in advance a predetermined determination condition under which a photographic image, which is an image obtained by imaging a photograph of the subject, is capable of being eliminated. The processor is configured to detect a brightness distribution of a face image obtained by imaging an authenticated person with an imaging unit, determine whether or not the detected brightness distribution satisfies the determination condition, and perform face authentication using the face image satisfying the determination condition.
Type: Application
Filed: May 30, 2018
Publication date: April 30, 2020
Applicant: PANASONIC INTELLECTUAL PROPERTY MANAGEMENT CO., LTD.
Inventors: Megumi YAMAOKA, Takayuki MATSUKAWA
-
Publication number: 20200134344
Abstract: The technology described in this document can be embodied in a method for preventing access to a secure system based on determining a captured image to be of an alternative representation of a live person. The method includes illuminating a subject with structured light using a light source array comprising multiple light sources disposed in a predetermined pattern, capturing an image of the subject as illuminated by the structured light, and determining that the image includes features representative of the predetermined pattern. The method also includes, responsive to determining that the image includes features representative of the predetermined pattern, identifying the subject in the image to be an alternative representation of a live person. The method further includes, responsive to identifying the subject in the image to be an alternative representation of a live person, preventing access to the secure system.
Type: Application
Filed: October 25, 2018
Publication date: April 30, 2020
Applicant: Alibaba Group Holding Limited
Inventors: Yash Joshi, Srikanth Parupati
-
Publication number: 20200134345
Abstract: The technology described in this document can be embodied in a method for preventing access to a secure system based on determining a captured image to be of an alternative representation of a live person. The method includes capturing an image of a subject illuminated by an infrared (IR) illumination source, and extracting, from the image, a portion representative of an iris of the subject. The method also includes determining that an amount of high-frequency features in the portion of the image satisfies a threshold condition indicative of the image being of an alternative representation of a live person, and in response, identifying the subject in the image to be an alternative representation of a live person. Responsive to identifying the subject in the image to be an alternative representation of a live person, the method further includes preventing access to the secure system.
Type: Application
Filed: October 26, 2018
Publication date: April 30, 2020
Applicant: Alibaba Group Holding Limited
Inventors: Yash Joshi, Srikanth Parupati
-
Publication number: 20200134346
Abstract: A system provides intelligent gallery management for biometrics. A first gallery is obtained that includes biometric and/or other information on a population of people. An application is identified. A subset of the population of people is identified based on the application. A second gallery is derived from the first gallery by pulling the information for the subset of the population of people without pulling the information for the population of people not in the subset. Biometric identification (such as facial recognition) for the application may then be performed using the second gallery rather than the first gallery. In this way, the system is improved as less time is required for biometric identification, fewer device resources are used, and so on.
Type: Application
Filed: December 31, 2019
Publication date: April 30, 2020
Inventor: Kevin Lupowitz
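The gallery-derivation step above can be sketched as a simple keyed filter: only the records for the application's subset of people are pulled into the second gallery, so later matching searches a smaller set. The record fields, the flight-based subset rule, and the boarding example are assumptions for illustration.

```python
# A full (first) gallery keyed by person ID; fields are illustrative.
first_gallery = {
    "p1": {"template": [0.1, 0.9], "flight": "AA100"},
    "p2": {"template": [0.4, 0.5], "flight": "AA100"},
    "p3": {"template": [0.8, 0.2], "flight": "BA200"},
}

def derive_gallery(gallery, subset_ids):
    """Pull only the subset's records; information for the rest of the
    population is never copied into the second gallery."""
    return {pid: gallery[pid] for pid in subset_ids if pid in gallery}

# Application: boarding flight AA100 -> the subset is that flight's passengers.
subset = [pid for pid, rec in first_gallery.items() if rec["flight"] == "AA100"]
second_gallery = derive_gallery(first_gallery, subset)
```

A facial-recognition search against `second_gallery` then compares probe templates against two records instead of the whole population, which is where the claimed time and resource savings come from.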
-
Publication number: 20200134347
Abstract: A system provides intelligent gallery management for biometrics. A first gallery is obtained that includes biometric and/or other information on a population of people. An application is identified. A subset of the population of people is identified based on the application. A second gallery is derived from the first gallery by pulling the information for the subset of the population of people without pulling the information for the population of people not in the subset. Biometric identification (such as facial recognition) for the application may then be performed using the second gallery rather than the first gallery. In this way, the system is improved as less time is required for biometric identification, fewer device resources are used, and so on.
Type: Application
Filed: December 31, 2019
Publication date: April 30, 2020
Inventor: Kevin Lupowitz
-
Publication number: 20200134348
Abstract: An example apparatus includes a memory to store a first image of a document and a second image of the document. The first image and the second image are captured under different conditions. The apparatus includes a processor coupled to the memory. The processor is to perform optical character recognition on the first image to generate a first output dataset and to perform optical character recognition on the second image to generate a second output dataset. The processor is further to determine whether consensus for a character is achieved based on a comparison of the first output dataset with the second output dataset, and generate a final output dataset based on the consensus for the character.
Type: Application
Filed: July 21, 2017
Publication date: April 30, 2020
Applicant: Hewlett-Packard Development Company, L.P.
Inventor: Mikhail Breslav
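The per-character consensus check described above can be sketched as follows. This is an illustrative simplification: the two OCR outputs are treated as aligned strings of equal length, and a disagreement is marked with a `?` placeholder, both assumptions not stated in the abstract.

```python
def consensus(ocr_a, ocr_b, unresolved="?"):
    """Combine two aligned OCR output strings into a final output dataset:
    keep a character where both passes agree, mark it unresolved otherwise."""
    return "".join(a if a == b else unresolved
                   for a, b in zip(ocr_a, ocr_b))

first_output = "INV0ICE 2020"   # OCR of the first image (misreads O as 0)
second_output = "INVOICE 2O20"  # OCR of the second image (misreads 0 as O)
final_output = consensus(first_output, second_output)
```

Characters that both captures agree on survive into the final output; the positions where the differently-conditioned captures disagree are exactly where single-pass OCR errors tend to hide.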
-
Publication number: 20200134349
Abstract: Included are a correction history recording unit that records, as correction history information, region information of a correction site with respect to text data converted from an original image; an accuracy calculation unit that calculates the accuracy of optical character recognition for each individual region on a layout of the original image on the basis of the correction history information; and a distribution image generation unit and a distribution image display unit, which generate and display a distribution image in which a difference in the magnitude of accuracy is shown as a difference in display aspect for every individual region. The distribution image thus generated and displayed is distinguished for every individual region, reflecting the tendency that the character recognition rate in a certain region on the layout of the original image may decrease due to various causes, including the format of the original document, the state of the OCR device, and the like.
Type: Application
Filed: October 29, 2019
Publication date: April 30, 2020
Inventors: Yutaka NAGOYA, Ko Shimazawa
-
Publication number: 20200134350
Abstract: An image processing apparatus includes detecting means for detecting a first detected region and a second detected region from an input image, on the basis of a first detection criterion and a second detection criterion, respectively; image setting means for setting, as a target image subjected to correction, an image including the first detected region, and setting, as a reference image that is referred to in the correction, an image including the second detected region; accepting means for accepting, from a user, designation of a region in the target image and a correction instruction for the designated region; correction region setting means for identifying, in the reference image, a region corresponding to the designated region, and for setting a to-be-corrected region on the basis of the identified region and the second detected region; and correcting means for correcting the first detected region in the target image on the basis of the to-be-corrected region set in the reference image.
Type: Application
Filed: December 31, 2019
Publication date: April 30, 2020
Inventors: Kei Takayama, Masakazu Matsugu, Atsushi Nogami, Mahoro Anabuki
-
Publication number: 20200134351
Abstract: Various embodiments include systems and methods of arthropod detection using an electronic arthropod detection device. The electronic arthropod detection device may scan a surface or a subject using a terahertz sensor that is sensitive to a terahertz band of electromagnetic radiation to detect the presence or likely presence of an arthropod in a region of interest (ROI). A camera sensitive to a visible band of electromagnetic radiation captures at least one image and provides the image(s) to an object detection model in response to determining that an arthropod is or is likely present in the ROI. A processor may initiate an arthropod detected procedure in response to detecting an arthropod in the ROI.
Type: Application
Filed: November 5, 2019
Publication date: April 30, 2020
Inventors: Paul T. DIAMOND, Margery Fox KWART, Beau TIPPETTS
-
Publication number: 20200134352Abstract: Provided is a method for automatically performing image alignment without a user input. An image alignment method performed by an image alignment device, according to one embodiment of the present invention, can comprise the steps of: recognizing at least one person in an inputted image; determining a person-of-interest among the recognized persons; and performing image alignment, on the basis of the person-of-interest, on the inputted image, wherein the image alignment is performed without an input of a user of the image alignment device for the image alignment.Type: ApplicationFiled: December 27, 2019Publication date: April 30, 2020Applicant: KITTEN PLANET CO., LTD.Inventors: Jong Ho CHOI, Sung Jin PARK, Dong Jun LEE, Jee Yun LEE
-
Publication number: 20200134353Abstract: A monitoring-screen-data generation device includes an object-data generation unit, a screen-data generation unit, and an assignment processing unit. The object-data generation unit identifies a plurality of objects included in an image based on image data, and generates object data. The screen-data generation unit generates monitoring screen data on the basis of the object data. On the basis of definition data that defines a state transition and the object data, the assignment processing unit assigns data that defines the state transition to an image object included in a monitoring screen of the monitoring screen data.Type: ApplicationFiled: March 21, 2017Publication date: April 30, 2020Applicant: Mitsubishi Electric CorporationInventor: Shingo SODENAGA
-
Publication number: 20200134354Abstract: An apparatus, method and computer program product are provided for predicting feature space decay using variational auto-encoder networks. Methods may include: receiving a first image of a road segment including a feature disposed along the road segment; applying a loss function to the feature of the first image; generating a revised image, where the revised image includes a weathered iteration of the feature; generating a predicted image using interpolation between the image and the revised image of a partially weathered iteration of the feature; receiving a user image, where the user image is received from a vehicle traveling along the road segment; correlating a feature in the user image to the partially weathered iteration of the feature in the predicted image; and establishing that the feature in the user image is the feature disposed along the road segment.Type: ApplicationFiled: October 30, 2018Publication date: April 30, 2020Inventor: Anirudh VISWANATHAN
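The "predicted image using interpolation" step of the abstract above can be illustrated with a minimal sketch. This assumes the fresh and fully weathered encodings (latent vectors from the variational auto-encoder) are already available; the interpolation weight `alpha` and function name are hypothetical.

```python
import numpy as np

def interpolate_decay(z_fresh, z_weathered, alpha):
    """Linearly interpolate in latent space between a 'fresh' feature
    encoding and a fully 'weathered' one. alpha=0 yields the fresh
    encoding, alpha=1 the weathered one, and intermediate values give a
    partially weathered representation."""
    z_fresh = np.asarray(z_fresh, dtype=float)
    z_weathered = np.asarray(z_weathered, dtype=float)
    return (1.0 - alpha) * z_fresh + alpha * z_weathered
```

The decoded image for an intermediate `alpha` would then serve as the "partially weathered iteration of the feature" against which user images are correlated.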
-
Publication number: 20200134355Abstract: An object of the present invention is to both suppress the data amount of an image processing system that learns a collation image used for image identification by a discriminator and improve the identification performance of the discriminator. In order to achieve this object, an image processing system is proposed that includes a discriminator that identifies an image using a collation image, and further includes a machine learning engine that performs machine learning of the collation image data required for image identification. The machine learning engine searches for a successfully identified image using an image for which identification has failed, and adds information, obtained from a partial image (selected by an input device) of the image for which identification has failed, to the successfully identified image obtained by the search, generating corrected collation image data.Type: ApplicationFiled: March 15, 2018Publication date: April 30, 2020Inventors: Shinichi SHINODA, Yasutaka TOYODA, Shigetoshi SAKIMURA, Masayoshi ISHIKAWA, Hiroyuki SHINDO, Hitoshi SUGAHARA
-
Publication number: 20200134356Abstract: In accordance with disclosed embodiments, an image processing method includes performing a first scan in a first direction on a first list of pixels in which, for each pixel in the first list, a feature point property is compared with a corresponding feature point property of each of a first set of neighboring pixels, performing a second scan in a second direction on the first list of pixels in which, for each pixel in the first list, a feature point property is compared with a corresponding feature point property of each of a second set of neighboring pixels, using the results of the first and second scans to identify pixels from the first list to be suppressed, and forming a second list of pixels that includes pixels from the first list that are not identified as pixels to be suppressed. The second list represents a non-maxima suppressed list.Type: ApplicationFiled: December 30, 2019Publication date: April 30, 2020Inventors: Deepak Kumar Poddar, Pramod Kumar Swami, Prashanth Viswanath
-
Publication number: 20200134357Abstract: Systems and methods for neural-network-based optical character recognition using specialized confidence functions. An example method comprises: receiving a grapheme image; computing, by a neural network, a feature vector representing the grapheme image in a space of image features; and computing a confidence vector associated with the grapheme image, wherein each element of the confidence vector reflects a distance, in the space of image features, between the feature vector and a center of a class of a set of classes, wherein the class is identified by an index of the element of the confidence vector.Type: ApplicationFiled: November 2, 2018Publication date: April 30, 2020Inventor: Aleksey Zhuravlev
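The distance-based confidence vector above can be sketched as follows. The abstract does not give the exact confidence function, so the softmax over negative Euclidean distances used here is an assumption; only the "each element reflects a distance to a class center" structure is taken from the abstract.

```python
import numpy as np

def distance_confidence(feature, class_centers):
    """Confidence vector for a grapheme feature vector: element k
    reflects the Euclidean distance from `feature` to the center of
    class k, normalized (here via softmax of negative distances) so
    that closer centers yield higher confidence."""
    feature = np.asarray(feature, dtype=float)
    centers = np.asarray(class_centers, dtype=float)
    dists = np.linalg.norm(centers - feature, axis=1)
    logits = -dists
    exp = np.exp(logits - logits.max())  # subtract max for stability
    return exp / exp.sum()
```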
-
Publication number: 20200134358Abstract: A system and processing methods for refining a convolutional neural network (CNN) to capture characterizing features of different classes are disclosed. In some embodiments, the system is programmed to start with the filters in one of the last few convolutional layers of the initial CNN, which often correspond to more class-specific features, rank them to hone in on more relevant filters, and update the initial CNN by turning off the less relevant filters in that one convolutional layer. The result is often a more generalized CNN that is rid of certain filters that do not help characterize the classes.Type: ApplicationFiled: October 24, 2019Publication date: April 30, 2020Inventors: YING SHE, WEI GUAN
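The rank-and-turn-off idea above can be sketched like this. The abstract does not state the relevance measure, so the L1 norm of each filter's weights is used here purely as a stand-in ranking; `keep_ratio` is a hypothetical parameter.

```python
import numpy as np

def prune_conv_filters(weights, keep_ratio=0.5):
    """Rank the filters of one convolutional layer by a relevance score
    (here: L1 norm of the filter weights, as a stand-in) and zero out
    the least relevant ones. `weights` has shape
    (num_filters, in_channels, kh, kw)."""
    scores = np.abs(weights).sum(axis=(1, 2, 3))
    k = max(1, int(len(scores) * keep_ratio))
    keep = np.argsort(scores)[::-1][:k]   # indices of top-k filters
    mask = np.zeros(len(scores), dtype=bool)
    mask[keep] = True
    pruned = weights.copy()
    pruned[~mask] = 0.0                   # "turn off" less relevant filters
    return pruned, mask
```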
-
Publication number: 20200134359Abstract: A cluster visualization apparatus is disclosed. A cluster visualization apparatus according to the present disclosure includes a state detector configured to obtain state information of a cluster configured with a plurality of boxes, a display, and a controller configured to display a three-dimensional model image configured with a plurality of layers corresponding to a plurality of network layers and to display an image corresponding to each of the plurality of boxes over at least one layer of the plurality of layers, based on the state information.Type: ApplicationFiled: March 13, 2018Publication date: April 30, 2020Inventors: Jong Won KIM, Taek Ho NAM, Ki Moon KIM
-
Publication number: 20200134360Abstract: Dimensionality reduction in high-dimensional datasets can decrease computation time, and processes for dimensionality reduction may even be useful in lower-dimensional datasets. It has been discovered that methods of dimensionality reduction may dramatically decrease computational requirements in machine learning programming techniques. This development unlocks the ability of computational modeling to be used to solve complex problems that, in the past, would have required computation time on orders of magnitude too great to be useful.Type: ApplicationFiled: July 6, 2017Publication date: April 30, 2020Inventors: Patrick Lilley, Michael John Colbus
-
Publication number: 20200134361Abstract: The method includes: obtaining a plurality of pieces of feature data; automatically performing two different types of nonlinear combination processing operations on the plurality of pieces of feature data to obtain two groups of processed data, where the two groups of processed data include a group of higher-order data and a group of lower-order data, the higher-order data is related to a nonlinear combination of m pieces of feature data in the plurality of pieces of feature data, and the lower-order data is related to a nonlinear combination of n pieces of feature data in the plurality of pieces of feature data, where m ≥ 3, and m > n ≥ 2; and determining prediction data based on a plurality of pieces of target data, where the plurality of pieces of target data include the two groups of processed data.Type: ApplicationFiled: December 27, 2019Publication date: April 30, 2020Inventors: Ruiming TANG, Huifeng GUO, Zhenguo LI, Xiuqiang HE
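One simple instantiation of the order-n combinations above uses plain products of n distinct features; the abstract leaves the exact nonlinear combination unspecified, so the product form here is an assumption.

```python
from itertools import combinations
from math import prod

def order_n_combinations(features, n):
    """All products of n distinct features: a 'nonlinear combination of
    n pieces of feature data'. n = 2 gives the lower-order group,
    n >= 3 a higher-order group."""
    return [prod(c) for c in combinations(features, n)]
```

For features `[x1, x2, x3]`, `n=2` yields the pairwise products `x1*x2, x1*x3, x2*x3`, and `n=3` the single triple product.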
-
Publication number: 20200134362Abstract: Disclosed are a system and a method for connection-information regularization, graph feature extraction, and graph classification based on the adjacency matrix. The connection-information elements of the adjacency matrix are concentrated into a specific diagonal region so as to reduce the non-connection elements in advance. The subgraph structure of the graph is then extracted along the diagonal direction using a filter matrix, and a stacked convolutional neural network is used to extract larger subgraph structures. On the one hand, this greatly reduces the amount of computation and complexity, overcoming the limitations of computational complexity and of window size. On the other hand, it can capture large subgraph structures through a small window, as well as deep features from the implicit correlation structures at both the vertex and edge level, improving the accuracy and speed of graph classification.Type: ApplicationFiled: December 26, 2019Publication date: April 30, 2020Inventors: Zhiling LUO, Jianwei YIN, Zhaohui WU, Shuiguang DENG, Ying LI, Jian WU
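The abstract does not specify how connection elements are concentrated near the diagonal; one classic ordering that achieves this effect is reverse Cuthill-McKee bandwidth reduction, sketched below as an illustration (not necessarily the patent's regularization).

```python
from collections import deque

def reverse_cuthill_mckee(adj):
    """Reorder the vertices of an undirected graph (adjacency dict:
    vertex -> list of neighbors) so that nonzeros of the permuted
    adjacency matrix cluster near the diagonal. Classic bandwidth-
    reduction heuristic: BFS from minimum-degree vertices, visiting
    neighbors in order of increasing degree, then reverse the order."""
    visited = set()
    order = []
    # Handle each connected component, starting from a min-degree vertex.
    for start in sorted(adj, key=lambda v: len(adj[v])):
        if start in visited:
            continue
        visited.add(start)
        queue = deque([start])
        while queue:
            v = queue.popleft()
            order.append(v)
            for w in sorted(adj[v], key=lambda u: len(adj[u])):
                if w not in visited:
                    visited.add(w)
                    queue.append(w)
    order.reverse()  # the 'reverse' in reverse Cuthill-McKee
    return order
```

Permuting rows and columns by the returned order narrows the band of the adjacency matrix, which is what lets a small diagonal filter window see the connection structure.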
-
Publication number: 20200134363Abstract: Methods, systems, and devices for automated feature selection and model generation are described. A device (e.g., a server, user device, database, etc.) may perform model generation for an underlying dataset and a specified outcome variable. The device may determine relevance measurements (e.g., stump R-squared values) for a set of identified features of the dataset and can reduce the set of features based on these relevance measurements (e.g., according to a double-box procedure). Using this reduced set of features, the device may perform a least absolute shrinkage and selection operator (LASSO) regression procedure to sort the features. The device may then determine a set of nested linear models—where each successive model of the set includes an additional feature of the sorted features—and may select a “best” linear model for model generation based on this set of models and a model quality criterion (e.g., an Akaike information criterion (AIC)).Type: ApplicationFiled: October 31, 2018Publication date: April 30, 2020Inventor: Paul Walter Hubenig
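The nested-model/AIC step above can be sketched as follows. This assumes the feature ordering (`sorted_idx`, standing in for the LASSO-produced sort) is already computed, and uses ordinary least squares with the AIC form n·log(RSS/n) + 2k; the patent's exact criterion details may differ.

```python
import numpy as np

def select_nested_model(X, y, sorted_idx):
    """Fit a set of nested linear models, each adding the next feature
    from `sorted_idx`, and return the feature count of the model
    minimizing AIC = n*log(RSS/n) + 2*(k+1) (intercept counted)."""
    n = len(y)
    best = None
    for k in range(1, len(sorted_idx) + 1):
        Xk = np.column_stack([np.ones(n), X[:, sorted_idx[:k]]])
        beta, *_ = np.linalg.lstsq(Xk, y, rcond=None)
        rss = ((y - Xk @ beta) ** 2).sum()
        aic = n * np.log(rss / n + 1e-12) + 2 * (k + 1)
        if best is None or aic < best[0]:
            best = (aic, k)
    return best[1]
```

The AIC penalty keeps the selection from always preferring the largest model, which is why adding weakly relevant features past the true ones does not win.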
-
Simultaneous Hyper Parameter and Feature Selection Optimization Using Evolutionary Boosting Machines
Publication number: 20200134364Abstract: Aspects relate to a machine learning system implementing an evolutionary boosting machine. The system may initially select randomized feature sets for an initial generation of candidate models. Evolutionary algorithms may be applied to the system to create later generations of the cycle, combining and mutating the feature selections of the candidate models. The system may determine an optimal number of boosting iterations for each candidate model in a generation by building boosting iterations from an initial value up to a predetermined maximum number of boosting iterations. When a final generation is achieved, the system may evaluate the optimal model of the generation. If the optimal number of boosting iterations of the optimal model does not meet the solution constraints on the optimal boosting iterations, the system may adjust a learning rate parameter and then proceed to the next cycle. Based on termination criteria, the system may determine a resulting/final optimal model.Type: ApplicationFiled: December 13, 2018Publication date: April 30, 2020Inventor: Ousef Kuruvilla
-
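The combine-and-mutate step of the evolutionary boosting machine abstract above can be sketched on boolean feature-selection masks. The uniform crossover and per-bit mutation shown here are standard evolutionary operators used for illustration; the patent's operators are not specified in the abstract.

```python
import random

def crossover(mask_a, mask_b, rng):
    """Uniform crossover: each feature bit of the child is taken from
    one parent's feature-selection mask at random."""
    return [a if rng.random() < 0.5 else b for a, b in zip(mask_a, mask_b)]

def mutate(mask, rate, rng):
    """Flip each feature bit independently with probability `rate`."""
    return [(not b) if rng.random() < rate else b for b in mask]
```

A generation would apply `crossover` to pairs of well-scoring candidates and `mutate` to the offspring before fitting each resulting feature subset with boosting.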
Publication number: 20200134365Abstract: An instance segmentation method includes: performing feature extraction on an image via a neural network to output features at at least two different hierarchies; extracting region features corresponding to at least one instance candidate region in the image from the features at the at least two different hierarchies, and fusing region features corresponding to a same instance candidate region, to obtain a first fusion feature of each instance candidate region; and performing instance segmentation based on each first fusion feature, to obtain at least one of an instance segmentation result of the corresponding instance candidate region or an instance segmentation result of the image.Type: ApplicationFiled: December 29, 2019Publication date: April 30, 2020Applicant: BEIJING SENSETIME TECHNOLOGY DEVELOPMENT CO., LTD.Inventors: Shu LIU, Lu QI, Haifang QIN, Jianping SHI, Jiaya JIA
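The fusion step above, merging region features extracted from different hierarchies for the same instance candidate region, can be sketched minimally. Element-wise max is one simple fusion choice (sum or learned weighting would also fit); the abstract does not specify the fusion operator.

```python
import numpy as np

def fuse_region_features(region_feats):
    """Fuse region features extracted for one instance candidate region
    from several feature hierarchies, here by element-wise maximum.
    All inputs are assumed to share the same (pooled) shape."""
    stacked = np.stack(region_feats, axis=0)
    return stacked.max(axis=0)
```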
-
Publication number: 20200134366Abstract: An object recognition method and apparatus for a deformed image are provided. The method includes: inputting an image into a preset localization network to obtain a plurality of localization parameters for the image, wherein the preset localization network comprises a preset number of convolutional layers, and wherein the plurality of localization parameters are obtained by regressing image features in a feature map that is generated from a convolution operation on the image; performing a spatial transformation on the image based on the plurality of localization parameters to obtain a corrected image; and inputting the corrected image into a preset recognition network to obtain an object classification result for the image. In the neural-network-based object recognition process, the embodiment of the present application thus first transforms the deformed image, and then performs object recognition on the transformed image.Type: ApplicationFiled: June 12, 2018Publication date: April 30, 2020Inventors: Yunlu XU, Gang ZHENG, Zhanzhan CHENG, Yi NIU
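The spatial transformation above can be sketched for the common spatial-transformer case where the localization parameters form a 2x3 affine matrix; the abstract does not fix the parameterization, so the affine form is an assumption.

```python
import numpy as np

def affine_warp_coords(h, w, theta):
    """Map an h x w output grid through a 2x3 affine matrix `theta`
    (the regressed localization parameters) to sampling coordinates in
    the input image. Sampling at these coordinates (e.g. bilinearly)
    yields the corrected image."""
    ys, xs = np.mgrid[0:h, 0:w]
    # Homogeneous coordinates: rows are x, y, 1.
    grid = np.stack([xs.ravel(), ys.ravel(), np.ones(h * w)])
    out = np.asarray(theta, dtype=float) @ grid   # shape (2, h*w)
    return out.reshape(2, h, w)
```

With the identity matrix `[[1, 0, 0], [0, 1, 0]]` the sampling grid is unchanged; regressed parameters instead undo the image's deformation before recognition.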
-
Publication number: 20200134367Abstract: A method is provided that includes generating a visual environment for interactive development of a machine learning (ML) model. The method includes accessing observations of data each of which includes values of independent variables and a dependent variable, and performing an interactive exploratory data analysis (EDA) of the values of a set of the independent variables. The method includes performing an interactive feature construction and selection based on the interactive EDA, and in which select independent variables are selected as or transformed into a set of features for use in building a ML model to predict the dependent variable. The method includes building the ML model using a ML algorithm, the set of features, and a training set produced from the set of features and observations of the data. And the method includes outputting the ML model for deployment to predict the dependent variable for additional observations of the data.Type: ApplicationFiled: October 25, 2018Publication date: April 30, 2020Inventors: Seema Chopra, Akshata Kishore Moharir, Arvind Sundararaman, Kaustubh Kaluskar