Patent Applications Published on April 28, 2022
-
Publication number: 20220129656
Abstract: A vehicle includes a first sensor including a capacitance sensor; a second sensor including an ultrasonic sensor; a storage configured to store ultrasonic pattern images; and a controller electrically connected to the first sensor, the second sensor, and the storage. The controller is configured to: wake up the second sensor based on a user being contiguous to the first sensor; obtain a fingerprint pattern image from the second sensor; when a difference between an area value of a ridge area of the fingerprint pattern image and an area value of a valley area adjacent to the ridge area is less than a predetermined reference value, obtain result data by assigning a weight to the area value of the valley area; and, when the result data and the ultrasonic pattern image data match more than a predetermined matching value upon comparison, recognize the fingerprint as corresponding to the user.
Type: Application
Filed: August 27, 2021
Publication date: April 28, 2022
Applicants: Hyundai Motor Company, Kia Corporation
Inventors: Jihye LEE, Taeseung KIM, Dong June SONG
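The weighting-and-matching step this abstract describes can be sketched roughly as follows. The scalar area values, thresholds, additive combination, and ratio-based matching score are all hypothetical simplifications of the patent's image-level computation:

```python
def match_fingerprint(ridge_area, valley_area, stored_area,
                      area_ref=0.1, match_ref=0.9, weight=1.2):
    """Hypothetical sketch: when the ridge and valley areas are nearly
    equal (e.g. a wet or dry finger), boost the valley area before
    matching against the stored ultrasonic pattern's area."""
    if abs(ridge_area - valley_area) < area_ref:
        valley_area *= weight          # assign a weight to the valley area
    result = ridge_area + valley_area  # simplified "result data"
    score = min(result, stored_area) / max(result, stored_area)
    return score >= match_ref          # exceeds the matching value?
```

A near-equal ridge/valley pair gets its valley boosted and can still match, while a clearly unbalanced pair is compared unweighted.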
-
Publication number: 20220129657
Abstract: A fingerprint sensor package includes a package substrate including an upper surface in which a sensing region and a peripheral region surrounding the sensing region are defined, and a lower surface facing the upper surface; a plurality of first sensing patterns that are arranged in the sensing region, are apart from each other in a first direction, and extend in a second direction crossing the first direction; a plurality of second sensing patterns that are arranged in the sensing region, are apart from each other in the second direction, and extend in the first direction; a coating member covering the sensing region; an upper ground pattern in the peripheral region and apart from the coating member to surround the coating member in the first and second directions; a controller chip on the lower surface of the package substrate; and a plurality of capacitors.
Type: Application
Filed: October 13, 2021
Publication date: April 28, 2022
Applicant: Samsung Electronics Co., Ltd.
Inventors: Jaehyun LIM, Younghwan PARK, Kwangjin LEE, Inho CHOI, Hyuntaek CHOI
-
Publication number: 20220129658
Abstract: An ultrasonic sensor comprises a substrate, a piezoelectric member disposed at the substrate and an upper electrode disposed on the piezoelectric member. The upper electrode includes a silver paste.
Type: Application
Filed: October 21, 2021
Publication date: April 28, 2022
Applicant: LG Display Co., Ltd.
Inventor: Kyungyeol RYU
-
Publication number: 20220129659
Abstract: An optical fingerprint sensor is provided. The optical fingerprint sensor includes a backplate structure layer, a pixel defining layer, and an organic photoelectric sensing layer, wherein the pixel defining layer is disposed on a side of the backplate structure layer; and a non-pixel region of the pixel defining layer is provided with a first non-pixel hole, and the organic photoelectric sensing layer is disposed in the first non-pixel hole.
Type: Application
Filed: October 18, 2021
Publication date: April 28, 2022
Inventors: Jing WANG, Ming LIU, Zheng LIU, Hongwei TIAN, Yibing FAN, Jia ZHAO
-
Publication number: 20220129660
Abstract: A system and a method of calculating coordinates of a pupil center point are provided. The system for acquiring the coordinates of the pupil center point includes a first camera, a second camera, a storage and a processor. The first camera is configured to capture a first image including a face and output the first image to the processor, the second camera is configured to capture a second image including a pupil and output the second image to the processor, a resolution of the first camera is lower than a resolution of the second camera, and the storage is configured to store processing data. The processor is configured to: acquire the first image and the second image; extract a first eye region corresponding to an eye from the first image; convert the first eye region into the second image, to acquire a second eye region corresponding to the eye in the second image; and detect a pupil in the second eye region and acquire the coordinates of the pupil center point.
Type: Application
Filed: May 13, 2021
Publication date: April 28, 2022
Applicants: BEIJING BOE OPTOELECTRONICS TECHNOLOGY CO., LTD., BOE TECHNOLOGY GROUP CO., LTD.
Inventors: Yachong XUE, Hao ZHANG, Lili CHEN, Jiankang SUN, Xinkai LI, Guixin YAN, Xiaolei LIU, Yaoyu LV, Menglei ZHANG
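The conversion of the low-resolution eye region into high-resolution coordinates can be sketched as a resolution-ratio mapping. This assumes the two cameras share an aligned field of view and that pure scaling suffices; the patent's conversion may involve calibration between the cameras:

```python
def map_eye_region(box, first_res, second_res):
    """Map an (x, y, w, h) eye region found in the low-resolution first
    image into coordinates of the high-resolution second image by simple
    scaling (assumes the two cameras' fields of view are aligned)."""
    sx = second_res[0] / first_res[0]
    sy = second_res[1] / first_res[1]
    x, y, w, h = box
    return (round(x * sx), round(y * sy), round(w * sx), round(h * sy))
```

Pupil detection then runs only inside the mapped region of the high-resolution frame, which is the apparent point of the two-camera split.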
-
Publication number: 20220129661
Abstract: The present invention discloses a data collection method, an unmanned aerial vehicle (UAV) and a storage medium. The method is used for a vision chip of the UAV, the vision chip including a main operating system and a real-time operating system, and the method includes: generating, by the real-time operating system, a trigger signal; collecting, by the real-time operating system based on the trigger signal, flight control data of the UAV and controlling an image sensor to collect an image sequence; synchronizing, by the real-time operating system, a time of the main operating system with a time of the real-time operating system; and performing, by the main operating system, visual processing on the flight control data and the image sequence, to ensure that the flight control data and the image sequence are collected synchronously. By using the method, accuracy of data collected during controlling of the UAV can be improved.
Type: Application
Filed: October 27, 2020
Publication date: April 28, 2022
Inventor: Zhaozao LI
-
Publication number: 20220129662
Abstract: Apparatus and methods for determining information about one or more objects in a 3-dimensional (3D) space are disclosed. One aspect of the method includes defining a virtual ground plane within a monitored 3D space. The virtual ground plane is divided into a plurality of bins. Each bin has a corresponding counter value. An object is detected in a respective image captured by each of a plurality of sensors. A respective line segment is selected corresponding to a respective line of sight between each of the plurality of image sensors and the detected object. One or more bins of the virtual ground plane are selected onto which a respective projected line segment of each respective line segment overlaps. Each counter value for each of the one or more selected bins is increased. A location of the object is determined based on a bin of the one or more bins having a highest counter value.
Type: Application
Filed: October 22, 2021
Publication date: April 28, 2022
Inventor: Zhiqian WANG
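The ground-plane voting scheme can be sketched as below. Each camera-to-object line segment, already projected onto the virtual ground plane, votes once for every bin it crosses, and the location is the bin with the highest counter. Sampling points along the segment is a simplification of exact segment-bin overlap:

```python
from collections import Counter

def locate_object(projected_segments, bin_size=1.0, samples=50):
    """Vote for ground-plane bins crossed by each projected line
    segment; return the bin with the highest counter value (a sketch
    with hypothetical bin size and sampling density)."""
    votes = Counter()
    for (x0, y0), (x1, y1) in projected_segments:
        crossed = set()
        for i in range(samples + 1):
            t = i / samples
            x, y = x0 + t * (x1 - x0), y0 + t * (y1 - y0)
            crossed.add((int(x // bin_size), int(y // bin_size)))
        for b in crossed:          # one vote per bin per segment
            votes[b] += 1
    return votes.most_common(1)[0][0]
```

With segments from two cameras converging on the same point, only the bin containing that point collects a vote from both segments.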
-
Publication number: 20220129663
Abstract: In one or more embodiments, one or more systems, one or more methods, and/or one or more processes may determine that a user is in a presence of an information handling system (IHS); determine a digital image of a face of the user; determine an angle of the face of the user with respect to a vertical axis of a camera based at least on the digital image; determine that the face is facing a display associated with the IHS; determine an amount of time that the user spends looking at the display; determine, via multiple sensors associated with the IHS, a heart rate and a respiratory rate associated with the user; determine that the user should move based at least on the amount of time, the heart rate, the respiratory rate, and the angle; and display information indicating that the user should move.
Type: Application
Filed: October 22, 2020
Publication date: April 28, 2022
Inventor: Karunakar Palicherla Reddy
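A toy rule combining the four signals the abstract lists might look like the following; every threshold here is an invented placeholder, not a value from the application:

```python
def should_move(look_time_s, heart_rate, resp_rate, face_angle_deg,
                max_look_time_s=1200, hr_rest=(50, 90),
                rr_rest=(12, 20), max_angle=15):
    """Suggest a break when the user has faced the display too long
    while heart and respiratory rates stay in a sedentary range
    (all thresholds are hypothetical)."""
    facing = abs(face_angle_deg) <= max_angle
    sedentary = (hr_rest[0] <= heart_rate <= hr_rest[1]
                 and rr_rest[0] <= resp_rate <= rr_rest[1])
    return facing and sedentary and look_time_s >= max_look_time_s
```

The face angle gates the timer: time spent facing away from the display does not count toward the prompt.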
-
Publication number: 20220129664
Abstract: A deepfake video detection system, including an input data detection module of a video recognition unit for setting a target video; a data pre-processing unit for detecting eye features from the face in the target video; and a feature extraction module for extracting the eye features and inputting them to a long-term recurrent convolutional neural network (LRCN). A learning module then performs sequence learning using a long short-term memory (LSTM) sequence; a state prediction module predicts the output of each neuron, and the long short-term memory model outputs the quantized eye state to a state quantification module. The original stored data from the normal video is compared with the quantified eye state information of the target video, and the recognition result is output by an output data recognition module.
Type: Application
Filed: May 20, 2021
Publication date: April 28, 2022
Inventors: Jung-Shian LI, I-Hsien Liu, Chuan-Kang Liu, Po-Yi Wu, Yen-Chu Peng
-
Publication number: 20220129665
Abstract: The present disclosure provides a method for training a convolutional neural network and a method and device for face recognition. By arranging a first training sample set including a first data set and a second data set, the convolutional neural network is trained and may be applied to the face recognition method. In the face recognition method, an initial face image frame may be extracted from a face detection image, and a portion above the eyes is cropped from the initial face image frame to serve as a target face detection image. According to the target face detection image, predicted identity information corresponding to the target face detection image is determined, and the target face detection image is then compared with a face reference image corresponding to the predicted identity information for face recognition.
Type: Application
Filed: June 18, 2021
Publication date: April 28, 2022
Inventor: Ruibin XUE
-
Publication number: 20220129666
Abstract: Embodiments of the present disclosure disclose an anomaly detector for detecting an anomaly in a sequence of poses of a human performing an activity. The anomaly detector includes an input interface configured to accept input data indicative of a distribution of the sequence of poses, a memory configured to store a discriminative one-class classifier having a pair of complementary classifiers bounding normal distribution of pose sequences in a reproducing kernel Hilbert space (RKHS), a processor configured to embed the input data into an element of the RKHS and classify the embedded data using the discriminative one-class classifier, and an output interface configured to render a classification result.
Type: Application
Filed: October 26, 2020
Publication date: April 28, 2022
Applicant: Mitsubishi Electric Research Laboratories, Inc.
Inventors: Anoop Cherian, Jue Wang
-
Publication number: 20220129667
Abstract: A method, apparatus, system, and computer program product for training a gesture recognition machine learning model system. Temporal images for a set of gestures used for ground operations for an aircraft are identified by a computer system. Pixel variation data identifying movement on a per image basis from the temporal images is generated by the computer system. The temporal images and the pixel variation data form training data. A set of feature machine learning models is trained by the computer system to recognize features using the training data.
Type: Application
Filed: October 25, 2021
Publication date: April 28, 2022
Inventors: Amir Afrasiabi, Kwang Hee Lee, Bhargavi Patel, Young Suk Cho, Junghyun Oh
-
Publication number: 20220129668
Abstract: In various embodiments, a device of a video conferencing system obtains a stream of video data depicting a participant of a video conference. The device analyzes the stream of video data to detect motion of the participant. The device identifies, by analyzing the motion of the participant using a machine learning model, the motion of the participant as clapping by the participant. The device provides an indication that the participant is clapping to one or more user interfaces of the video conferencing system.
Type: Application
Filed: October 27, 2020
Publication date: April 28, 2022
Inventors: William Edward Reed, Qian Yu
-
Publication number: 20220129669
Abstract: A system and method for providing multi-camera 3D body part labeling and performance metrics includes receiving 2D image data and 3D depth data from a plurality of image capture units (ICUs), each indicative of a scene viewed by the ICUs, the scene having at least one person, each ICU viewing the person from a different viewing position; determining 3D location data and a visibility confidence level for the body parts from each ICU, using the 2D image data and the 3D depth data from each ICU; transforming the 3D location data for the body parts from each ICU to a common reference frame for body parts having at least a predetermined visibility confidence level; averaging the transformed, visible 3D body part locations from each ICU; and determining a performance metric of at least one of the body parts using the averaged 3D body part locations. The person may be a player in a sports scene.
Type: Application
Filed: October 22, 2020
Publication date: April 28, 2022
Inventors: Jayadas Devassy, Peter Walsh
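The confidence-gated averaging step can be sketched as follows, assuming each ICU's 3D location has already been transformed into the common reference frame (the transform itself is camera-calibration-specific and omitted here):

```python
def fuse_part_location(observations, min_confidence=0.5):
    """Average one body part's 3D locations reported by several image
    capture units, keeping only observations whose visibility confidence
    meets a predetermined level. Returns None if no ICU saw the part
    well enough. The threshold value is a hypothetical placeholder."""
    kept = [loc for loc, conf in observations if conf >= min_confidence]
    if not kept:
        return None
    n = len(kept)
    return tuple(sum(loc[i] for loc in kept) / n for i in range(3))
```

An occluded view (low confidence) simply drops out of the average instead of dragging the fused location toward a bad estimate.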
-
Publication number: 20220129670
Abstract: A distractor detector includes a heatmap network and a distractor classifier. The heatmap network operates on an input image to generate a heatmap for a main subject, a heatmap for a distractor, and optionally a heatmap for the background. Each object is cropped within the input image to generate a corresponding cropped image. Regions within the heatmaps that correspond to the objects are identified, and each of the regions is cropped within each of the heatmaps to generate cropped heatmaps. The distractor classifier then operates on the cropped images and the cropped heatmaps to classify each of the objects as being either a main subject or a distractor.
Type: Application
Filed: October 28, 2020
Publication date: April 28, 2022
Inventors: ZHE LIN, LUIS FIGUEROA, ZHIHONG DING, SCOTT COHEN
-
Publication number: 20220129671
Abstract: Disclosed herein are system, method, and computer program product embodiments for document information extraction without additional annotations. An embodiment operates by receiving an input representing a document and a key. The embodiment processes the input using a convolutional neural network to obtain a feature map. The embodiment combines the feature map with positional information to obtain a spatial-aware feature map. The embodiment then repeatedly performs the following decoding process: generate attention weights, generate a context vector based on the spatial-aware feature map and the generated attention weights using an attention layer, process the context vector, the key, and an input vector using a recurrent neural network (RNN) to obtain a RNN state, and generate an output vector based on the RNN state and the context vector using a projection layer. The embodiment then extracts a field based on the result of the decoding process.
Type: Application
Filed: October 22, 2020
Publication date: April 28, 2022
Inventors: Shachar Klaiman, Marius Lehne
-
Publication number: 20220129672
Abstract: There is provided an identification method including acquiring a first image, a pixel value of each of pixels of which represents a distance from a first position to an imaging target object including an identification target object, acquiring a second image captured from the first position or a second position different from the first position, a pixel value of each of pixels of the second image representing at least luminance of reflected light from the imaging target object, identifying a type of the identification target object based on the second image, and calculating, based on the first image, an indicator value indicating a reliability degree of an identification result of the type of the identification target object based on the second image.
Type: Application
Filed: October 25, 2021
Publication date: April 28, 2022
Applicant: SEIKO EPSON CORPORATION
Inventors: Takumi OIKE, Akira IKEDA
-
Publication number: 20220129673
Abstract: Implementations are disclosed for selectively operating edge-based sensors and/or computational resources under circumstances dictated by observation of targeted plant trait(s) to generate targeted agricultural inferences. In various implementations, triage data may be acquired at a first level of detail from a sensor of an edge computing node carried through an agricultural field. The triage data may be locally processed at the edge using machine learning model(s) to detect targeted plant trait(s) exhibited by plant(s) in the field. Based on the detected plant trait(s), a region of interest (ROI) may be established in the field. Targeted inference data may be acquired at a second, greater level of detail from the sensor while the sensor is carried through the ROI. The targeted inference data may be locally processed at the edge using one or more of the machine learning models to make a targeted inference about plants within the ROI.
Type: Application
Filed: October 22, 2020
Publication date: April 28, 2022
Inventors: Sergey Yaroshenko, Zhiqiang Yuan
-
Publication number: 20220129674
Abstract: A method and a device for determining an extraction model of a green tide coverage ratio based on mixed pixels. The method includes acquiring sample truth values of water body and green tide respectively corresponding to a plurality of target regions; acquiring a plurality of first remote sensing data of a first satellite sensor, wherein the plurality of first remote sensing data are in one-to-one correspondence with the plurality of target regions; determining reflection index sets respectively corresponding to the plurality of target regions according to the plurality of first remote sensing data; and determining the extraction model of the green tide coverage ratio corresponding to the first satellite sensor according to the sample truth value corresponding to each of the plurality of target regions and the reflection index set corresponding to each of the plurality of target regions.
Type: Application
Filed: July 12, 2021
Publication date: April 28, 2022
Applicant: The Second Institute of Oceanography (SIO), MNR
Inventors: Difeng WANG, Xiaoguang HUANG, Fang GONG, Yan BAI, Xianqiang HE
-
Publication number: 20220129675
Abstract: An information processing apparatus comprises a first selection unit configured to select, as at least one candidate learning model, at least one learning model from a plurality of learning models learned under learning environments different from each other based on information concerning image capturing of an object, a second selection unit configured to select at least one candidate learning model from the at least one candidate learning model based on a result of object detection processing by the at least one candidate learning model selected by the first selection unit, and a detection unit configured to perform the object detection processing for a captured image of the object using at least one candidate learning model of the at least one candidate learning model selected by the second selection unit.
Type: Application
Filed: October 25, 2021
Publication date: April 28, 2022
Inventors: Masafumi Takimoto, Tatsuya Yamamoto, Eita Ono, Satoru Mamiya, Shigeki Hirooka
-
Publication number: 20220129676
Abstract: Provided is an information providing method in which a terminal apparatus including an imaging unit and a display unit displays, on the display unit, an image of a vehicle interior captured by the imaging unit. A plurality of devices are installed in the vehicle interior and classified into a plurality of types. The information providing method includes an image acquisition step of acquiring the image of the vehicle interior captured by the imaging unit, and a display step of displaying the image on the display unit, and in a case where at least one of the plurality of devices is included in the image, displaying a graphic corresponding to the device in a superimposed manner on the device and also displaying information indicating a type to which the device belongs.
Type: Application
Filed: October 27, 2021
Publication date: April 28, 2022
Inventors: Masahide KOBAYASHI (deceased), Daisuke YAMAOKA, Yoshiaki NEDACHI
-
Publication number: 20220129677
Abstract: Systems, devices, and methods related to video analysis using an Artificial Neural Network (ANN) are described. For example, a data storage device can be configured to perform the computation of an ANN to recognize or classify features captured in the video images. The recognition or classification results of a prior video frame can be used to accelerate the analysis of the next video frame. The ANN can be organized in layers, where the intermediate result of a current layer can be further analyzed by a next layer for improved accuracy and confidence level. Before or while processing using the next layer, the intermediate result can be compared to the results obtained for the prior frame. If, in view of the results of the prior frame, the confidence level of the intermediate result is boosted to above a threshold, the subsequent layer(s) can be skipped or terminated early.
Type: Application
Filed: October 22, 2020
Publication date: April 28, 2022
Inventor: Poorna Kale
-
Publication number: 20220129678
Abstract: A method includes receiving a query from a user requesting a prediction of the future location of a target at a time t>0, receiving a trafficability map wherein the target is located in the trafficability map at an initial time t=0, receiving information about the target including uncertainty on the speed of the target, modeling motion of the target using information from the trafficability map including terrain and target mobility on different terrain types to locations on the trafficability map, and answering the query from the user based on output from modeling motion of the target.
Type: Application
Filed: December 4, 2020
Publication date: April 28, 2022
Applicant: Goodrich Corporation
Inventor: Suhail Shabbir Saquib
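A deterministic core of such a motion model can be sketched as a shortest-time search over a trafficability grid; the abstract's handling of speed uncertainty is omitted here, so this computes only the earliest possible arrival time per cell under assumed terrain speeds:

```python
import heapq

def earliest_arrival(grid_speed, start):
    """Dijkstra over a 2D trafficability grid: grid_speed[r][c] is the
    target's maximum speed on that cell's terrain (0 = impassable).
    Returns the earliest arrival time to every cell, taking each cell
    as one unit of distance (a simplified sketch)."""
    rows, cols = len(grid_speed), len(grid_speed[0])
    inf = float("inf")
    t = [[inf] * cols for _ in range(rows)]
    t[start[0]][start[1]] = 0.0
    pq = [(0.0, start)]
    while pq:
        d, (r, c) = heapq.heappop(pq)
        if d > t[r][c]:
            continue  # stale queue entry
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and grid_speed[nr][nc] > 0:
                nd = d + 1.0 / grid_speed[nr][nc]  # time = distance / speed
                if nd < t[nr][nc]:
                    t[nr][nc] = nd
                    heapq.heappush(pq, (nd, (nr, nc)))
    return t
```

A query "could the target be at cell (r, c) by time t?" is then answered by comparing t against the cell's earliest arrival time.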
-
Publication number: 20220129679
Abstract: Machine learning-based techniques for summarizing collections of data such as image and video data leveraging side information obtained from related (e.g., video) data are provided. In one aspect, a method for video summarization includes: obtaining related videos having content related to a target video; and creating a summary of the target video using information provided by the target video and side information provided by the related videos to select portions of the target video to include in the summary. The side information can include video data, still image data, text, comments, natural language descriptions, and combinations thereof.
Type: Application
Filed: October 27, 2020
Publication date: April 28, 2022
Inventors: Rameswar Panda, Chuang Gan, Pin-Yu Chen, Bo Wu
-
Publication number: 20220129680
Abstract: Methods, systems and computer program products for processing a stream of image frames captured by a camera system. A hardcoded alert image frame is generated in response to detecting an event. The hardcoded alert image frame includes motion deltas and/or color changes with respect to an event image frame. A stream of encoded image frames is generated, in which stream the hardcoded alert image frame is inserted in display order after the encoded event image frame.
Type: Application
Filed: October 14, 2021
Publication date: April 28, 2022
Applicant: Axis AB
Inventors: Viktor EDPALM, Song YUAN, Adnan SALEEM, Rodrigo SUCH
-
Publication number: 20220129681
Abstract: A system and method are disclosed that allow for early detection and management of wildfires. According to various aspects and embodiments of the invention, the system and method rapidly and economically detect wildfires and help manage, contain, control, and suppress their progress.
Type: Application
Filed: October 28, 2021
Publication date: April 28, 2022
Inventor: Preet ANAND
-
Publication number: 20220129682
Abstract: Methods and systems for fully-automatic image processing to detect and remove unwanted people from a digital image of a photograph. The system includes the following modules: 1) a deep neural network (DNN)-based module for object segmentation and head pose estimation; 2) classification (or grouping) of wanted versus unwanted people based on information collected in the first module; 3) image inpainting of the unwanted people in the digital image. The classification module can be rules-based in an example. In an example, the DNN-based module generates, from the digital image: 1. a list of object category labels, 2. a list of object scores, 3. a list of binary masks, 4. a list of object bounding boxes, 5. a list of crowd instances, 6. a list of human head bounding boxes, and 7. a list of head poses (e.g., yaws, pitches, and rolls).
Type: Application
Filed: October 23, 2020
Publication date: April 28, 2022
Inventors: Qiang TANG, Zili YI, Zhan XU
-
Publication number: 20220129683
Abstract: Systems and methods analyze a data set including a plurality of images. In one implementation, at least one processor receives a plurality of images acquired by one or more cameras associated with at least one vehicle; and analyzes the plurality of images using an active learning system configured to determine a relative priority ranking among the plurality of images. The relative priority ranking indicates an ordered sequence for the plurality of images, and is determined based on at least one indicator, determined for each of the plurality of images, of a complexity level and a diversity level associated with representations of one or more objects represented in the plurality of images. The at least one processor then outputs information indicating the relative priority ranking among the plurality of images.
Type: Application
Filed: October 22, 2021
Publication date: April 28, 2022
Inventor: Galit Levin
-
Publication number: 20220129684
Abstract: Systems and methods for object detection. Object detection may be used to control autonomous vehicle(s). For example, the methods comprise: obtaining, by a computing device, a LiDAR dataset generated by a LiDAR system of an autonomous vehicle; and using, by the computing device, the LiDAR dataset and image(s) to detect an object that is in proximity to the autonomous vehicle. The object is detected by performing the following operations: computing a distribution of object detections that each point of the LiDAR dataset is likely to be in; creating a plurality of segments of LiDAR data points using the distribution of object detections; merging the plurality of segments of LiDAR data points to generate merged segments; and detecting the object in a point cloud defined by the LiDAR dataset based on the merged segments. The object detection may be used by the computing device to facilitate at least one autonomous driving operation.
Type: Application
Filed: October 23, 2020
Publication date: April 28, 2022
Inventors: Arsenii Saranin, Basel Alghanem, G. Peter K. Carr
-
Publication number: 20220129685
Abstract: System and method for object detection. Images from cameras are provided to an inference engine to detect objects in real time: the inference engine detects the non-background and background pixels of the objects in the images, and the position and size of the objects in the images are determined based on contemporaneously gathered LiDAR data and the relationship of non-background to background pixels.
Type: Application
Filed: October 20, 2021
Publication date: April 28, 2022
Inventors: Raajitha Gummadi, Abhayjeet S. Juneja
-
Publication number: 20220129686
Abstract: A computer includes a processor and a memory, the memory storing instructions executable by the processor to determine respective probabilities of a direction of a gaze of a vehicle occupant toward each of a plurality of points in an image, determine a gaze distance from a center of the image based on the probabilities, and, upon determining that the gaze distance exceeds a threshold, suppress manual control of at least one vehicle component.
Type: Application
Filed: October 22, 2020
Publication date: April 28, 2022
Applicant: Ford Global Technologies, LLC
Inventors: Kyoung Min Lee, Parthasarathy Subburaj, Durga Priya Kumar, Tilak D
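One plausible reading of "gaze distance based on the probabilities" is a probability-weighted expected distance from the image center; the Euclidean metric and the expectation over candidate points are assumptions, not details from the application:

```python
def gaze_distance(points, probs, center):
    """Probability-weighted distance of the occupant's gaze from the
    image center: each candidate (x, y) point contributes its distance
    scaled by the probability that the gaze is directed toward it."""
    cx, cy = center
    return sum(p * ((x - cx) ** 2 + (y - cy) ** 2) ** 0.5
               for (x, y), p in zip(points, probs)) / sum(probs)

def manual_control_allowed(points, probs, center, threshold):
    # Suppress manual control when the gaze drifts beyond the threshold.
    return gaze_distance(points, probs, center) <= threshold
```

A gaze distributed between an off-center point and the center lands between the two, so the threshold acts on the expected drift rather than any single candidate.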
-
Publication number: 20220129687
Abstract: Systems and methods for detecting symptoms of occupant illness are disclosed herein. In embodiments, a storage is configured to maintain a visualization application and data from one or more sources, such as an audio source, an image source, and/or a radar source. A processor is in communication with the storage and a user interface. The processor is programmed to receive data from the one or more sources, execute human-detection models based on the received data, execute activity-recognition models to recognize symptoms of illness based on the data from the one or more sources, determine a location of the recognized symptoms, and execute a visualization application to display information in the user interface. The visualization application can show a background image with an overlaid image that includes an indicator for each location of a recognized symptom of illness. Additionally, data from the audio source, image source, and/or radar source can be fused.
Type: Application
Filed: October 23, 2020
Publication date: April 28, 2022
Inventors: Sirajum MUNIR, Samarjit DAS, Yunze ZENG, Vivek JAIN
-
Publication number: 20220129688
Abstract: Methods and systems are presented for extracting categorizable information from an image using a graph that models data within the image. Upon receiving an image, a data extraction system identifies characters in the image. The data extraction system then generates bounding boxes that enclose adjacent characters that are related to each other in the image. The data extraction system also creates connections between the bounding boxes based on locations of the bounding boxes. A graph is generated based on the bounding boxes and the connections such that the graph can accurately represent the data in the image. The graph is provided to a graph neural network that is configured to analyze the graph and produce an output. The data extraction system may categorize the data in the image based on the output.
Type: Application
Filed: October 22, 2020
Publication date: April 28, 2022
Inventors: Xiaodong Yu, Hewen Wang
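The step of enclosing adjacent related characters in shared bounding boxes can be sketched with a simple merging rule; the `max_gap` adjacency criterion and the same-row test are stand-ins for whatever relatedness measure the application actually uses:

```python
def group_characters(char_boxes, max_gap=2):
    """Merge per-character boxes (x, y, w, h) on the same text line into
    word-level bounding boxes when the horizontal gap between adjacent
    characters is at most max_gap (a hypothetical adjacency rule)."""
    merged = []
    for x, y, w, h in sorted(char_boxes):
        if merged:
            mx, my, mw, mh = merged[-1]
            # Same line and close enough horizontally: extend the box.
            if y == my and x - (mx + mw) <= max_gap:
                merged[-1] = (mx, my, (x + w) - mx, max(mh, h))
                continue
        merged.append((x, y, w, h))
    return merged
```

The merged boxes would then become graph nodes, with edges added between boxes whose locations are near each other, before the graph is handed to the graph neural network.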
-
Publication number: 20220129689
Abstract: A method and apparatus for face recognition robust to the alignment of the face, the method comprising: estimating prior information of a facial shape from an input image cropped from an image including a face using a first deep neural network (DNN); extracting feature information of facial appearance from the input image by using a second DNN; training the face recognition apparatus by using a face image decoder based on the prior information and the feature information; and extracting, from a test image, facial shape-aware features in the inference step by using the trained second DNN.
Type: Application
Filed: October 28, 2021
Publication date: April 28, 2022
Applicant: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
Inventors: Hyungil KIM, Kimin YUN, Yongjin KWON, Jin Young MOON, Jongyoul PARK, Kang Min BAE, Sungchan OH, Youngwan LEE
-
Publication number: 20220129690
Abstract: There is provided an identification method including acquiring a first image, a pixel value of each of pixels of which represents a distance from a first position to a first imaging target object including a background object and an identification target object, acquiring a second image captured from the first position or a second position different from the first position, a pixel value of each of pixels of the second image representing at least luminance of reflected light from the first imaging target object, specifying, based on the first image, a first region occupied by the identification target object in the first image, and identifying a type of the identification target object based on an image of a second region corresponding to the first region in the second image.
Type: Application
Filed: October 26, 2021
Publication date: April 28, 2022
Applicant: SEIKO EPSON CORPORATION
Inventors: Akira IKEDA, Takumi OIKE
-
Publication number: 20220129691
Abstract: A non-standard user interface object identification system includes an object candidate extractor that extracts one or more objects from an image, a first similarity analyzer that determines object type candidates of the one or more objects in accordance with similarities between the one or more objects and a standard user interface (UI) element, a second similarity analyzer that selects object type-specific weight values in accordance with layout characteristics of the one or more objects and determines object types of the one or more objects using the object type candidates and the object type-specific weight values, and an object identifier that receives type and characteristic information of a search target object and identifies the search target object in accordance with characteristic information and the object types of the one or more objects.
Type: Application
Filed: October 27, 2021
Publication date: April 28, 2022
Inventors: Hyo Young KIM, Koo Hyun PARK, Keun Taek PARK
-
Publication number: 20220129692
Abstract: A computer readable recording medium storing at least one program, wherein an image pattern determining method for determining a stripe image can be performed when the program is executed. The image pattern determining method comprises: (a) classifying a single target image into a plurality of image blocks, wherein each of the image blocks comprises at least one pixel; (b) calculating pixel differences between pixel value sums of at least one of the image blocks and neighboring image blocks of the image block; (c) calculating image variation levels of each of the image blocks in a plurality of directions according to the pixel differences; and (d) determining whether the single target image comprises the stripe image or not according to the image variation levels. An image pattern determining method for determining a checkerboard image is also disclosed.
Type: Application
Filed: October 25, 2020
Publication date: April 28, 2022
Inventor: Joon Chok LEE
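Steps (a) through (c) could be sketched as follows. The block size, the use of mean absolute difference as the variation level, and the restriction to two directions are assumptions for illustration:

```python
# Loose sketch: tile the image into blocks, sum pixel values per block,
# then measure variation between neighboring block sums horizontally and
# vertically. Vertical stripes show up as high horizontal variation.

def block_sums(image, bs):
    """image: 2D list of pixel values; returns a grid of per-block sums."""
    h, w = len(image), len(image[0])
    grid = []
    for by in range(0, h, bs):
        row = []
        for bx in range(0, w, bs):
            s = sum(image[y][x]
                    for y in range(by, min(by + bs, h))
                    for x in range(bx, min(bx + bs, w)))
            row.append(s)
        grid.append(row)
    return grid

def variation_levels(grid):
    """Mean absolute difference between horizontally and vertically
    neighboring block sums."""
    horiz = [abs(r[i] - r[i + 1]) for r in grid for i in range(len(r) - 1)]
    vert = [abs(grid[j][i] - grid[j + 1][i])
            for j in range(len(grid) - 1) for i in range(len(grid[0]))]
    mean = lambda xs: sum(xs) / len(xs) if xs else 0.0
    return mean(horiz), mean(vert)
```

A vertically striped image then yields a large horizontal variation level and a near-zero vertical one, which step (d) could threshold.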
-
Publication number: 20220129693
Abstract: According to one embodiment, a state determination apparatus includes a processor. The processor acquires a targeted image. The processor acquires a question concerning the targeted image and an expected answer to the question. The processor generates an estimated answer estimated with respect to the question concerning the targeted image using a trained model trained to estimate an answer based on a question concerning an image. The processor determines a state of a target for determination in accordance with a similarity between the expected answer and the estimated answer.
Type: Application
Filed: August 30, 2021
Publication date: April 28, 2022
Applicant: KABUSHIKI KAISHA TOSHIBA
Inventors: Quoc Viet PHAM, Toshiaki NAKASU, Nao MISHIMA, Shojun NAKAYAMA
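The final determination step might look like the sketch below. The abstract does not specify the similarity measure; token-overlap (Jaccard) similarity and the cutoff value are assumptions:

```python
# Hypothetical sketch: compare the expected answer with the model's
# estimated answer, and flag the target when similarity falls below a cutoff.

def answer_similarity(expected, estimated):
    """Jaccard overlap between the token sets of the two answers."""
    a, b = set(expected.lower().split()), set(estimated.lower().split())
    return len(a & b) / len(a | b) if a | b else 1.0

def determine_state(expected, estimated, cutoff=0.5):
    """Return 'normal' when the answers agree closely enough."""
    if answer_similarity(expected, estimated) >= cutoff:
        return "normal"
    return "abnormal"
```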
-
Publication number: 20220129694
Abstract: An electronic device and a method for screening a sample are provided. The method includes the following steps. N samples corresponding to a first object are received, in which the N samples include a first sample. N similarity vectors respectively corresponding to the N samples are calculated, in which the N similarity vectors include a first similarity vector corresponding to the first sample. The first similarity vector includes multiple first similarities between the first sample and each of the N samples except the first sample. The first sample is determined to be a representative sample of the first object in response to an average value of the first similarities of the first similarity vector being the maximum value among average values of N similarities respectively corresponding to the N similarity vectors.
Type: Application
Filed: October 13, 2021
Publication date: April 28, 2022
Applicant: Coretronic Corporation
Inventors: Yi-Fan Liou, Hsin-Ya Liang, Kai-Cheng Hu
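The screening rule reduces to: average each sample's similarity to all the others and pick the sample with the highest average. A minimal sketch, assuming cosine similarity over feature vectors (the abstract does not name the similarity measure):

```python
# Hypothetical sketch of the representative-sample rule.

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = sum(x * x for x in a) ** 0.5
    nb = sum(y * y for y in b) ** 0.5
    return dot / (na * nb)

def representative_index(samples):
    """Index of the sample whose mean similarity to the others is maximal."""
    n = len(samples)
    means = []
    for i in range(n):
        sims = [cosine(samples[i], samples[j]) for j in range(n) if j != i]
        means.append(sum(sims) / len(sims))
    return max(range(n), key=lambda i: means[i])
```

Intuitively, the winner is the sample closest to the "middle" of the cluster, which is why it can stand in for the whole object.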
-
Publication number: 20220129695
Abstract: A computer-implemented system and method learn an optimized interacting set of operational policies for implementation by multiple agents, where each agent is capable of learning an operational policy of the interacting set of operational policies. The system includes a first framework sub-system and a second framework sub-system. The first framework sub-system is configured to modify one or both of reward functions and transition functions of a stochastic game undertaken by a plurality of agents in a simulated environment of the second framework sub-system, and to update the reward and/or the transition functions based on feedback from the second framework sub-system. The system may generate policies that are capable of coping with deviations in the domains in which they are deployed and may perform alterations to the environment so as to induce optimal system outcomes.
Type: Application
Filed: January 6, 2022
Publication date: April 28, 2022
Inventors: David MGUNI, Tian ZHENG, Yaodong YANG
-
Publication number: 20220129696
Abstract: In various examples, the present disclosure relates to using temporal filters for automated real-time classification. The technology described herein improves the performance of a multiclass classifier that may be used to classify a temporal sequence of input signals, such as input signals representative of video frames. A performance improvement may be achieved, at least in part, by applying a temporal filter to an output of the multiclass classifier. For example, the temporal filter may leverage classifications associated with preceding input signals to improve the final classification given to a subsequent signal. In some embodiments, the temporal filter may also use data from a confusion matrix to correct for the probable occurrence of certain types of classification errors. The temporal filter may be a linear filter, a nonlinear filter, an adaptive filter, and/or a statistical filter.
Type: Application
Filed: January 7, 2022
Publication date: April 28, 2022
Inventors: Sakthivel Sivaraman, Shagan Sah, Niranjan Avadhanam
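One plausible reading of such a temporal filter is an exponential smoothing of the per-class probabilities across frames before taking the argmax, so a one-frame misclassification is outvoted by its neighbors. The smoothing factor is an assumption, and the confusion-matrix correction the abstract also covers is not sketched here:

```python
# Hypothetical sketch: smooth per-frame class probabilities over time,
# then classify from the smoothed state.

def smooth_classify(prob_seq, alpha=0.5):
    """prob_seq: list of per-frame class-probability lists.
    Returns the filtered class index for each frame."""
    state = list(prob_seq[0])
    labels = [max(range(len(state)), key=state.__getitem__)]
    for probs in prob_seq[1:]:
        # Exponential moving average of the probability vector.
        state = [alpha * p + (1 - alpha) * s for p, s in zip(probs, state)]
        labels.append(max(range(len(state)), key=state.__getitem__))
    return labels
```

With a sequence that flickers to class 1 for a single frame, the raw argmax would report the flicker, while the smoothed state keeps class 0 throughout.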
-
Publication number: 20220129697
Abstract: Training a robust machine learning model by mapping an input data set to a first feature space, applying a transformation to the first feature space, yielding a second feature space, and training a dense model using the first feature space and the second feature space.
Type: Application
Filed: October 28, 2020
Publication date: April 28, 2022
Inventor: Amod Jog
-
Publication number: 20220129698
Abstract: Systems, apparatus, and methods are disclosed for foreline diagnostics and control. A foreline coupled to a chamber exhaust is instrumented with one or more sensors, in some embodiments placed between the chamber exhaust and an abatement system. The one or more sensors are positioned to measure pressure in the foreline as an indicator of conductance. The sensors are coupled to a trained machine learning model configured to provide a signal when the foreline needs a cleaning cycle or when preventive maintenance should be performed. In some embodiments, the trained machine learning model predicts when cleaning or preventive maintenance will be needed.
Type: Application
Filed: October 27, 2020
Publication date: April 28, 2022
Inventors: Ala MORADIAN, Martin A. HILKENE, Zuoming ZHU, Errol Antonio C. SANCHEZ, Bindusagar MARATH SANKARATHODI, Patricia M. LIU, Surendra Singh SRIVASTAVA
-
Publication number: 20220129699
Abstract: A computer-implemented unsupervised learning method of training a video feature extractor. The video feature extractor is configured to extract a feature representation from a video sequence. The method uses training data representing multiple training video sequences. From a training video sequence of the multiple training video sequences, a current subsequence, a preceding subsequence preceding the current subsequence, and a succeeding subsequence succeeding the current subsequence are selected. The video feature extractor is applied to the current subsequence to extract a current feature representation of the current subsequence. A training signal is derived from a joint predictability of the preceding and succeeding subsequences given the current feature representation. The parameters of the video feature extractor are updated based on the training signal.
Type: Application
Filed: September 28, 2021
Publication date: April 28, 2022
Inventors: Mehdi Noroozi, Nadine Behrmann
-
Publication number: 20220129700
Abstract: A computer-implemented method, medium, and system are disclosed. One example method includes determining multiple model bases by multiple service parties. A respective local service model is constructed by each service party. Respective local training samples are processed by each service party using the respective local service model to determine respective gradient data corresponding to each model basis. The respective gradient data is sent to a server. In response to determining that a first model basis satisfies a gradient update condition, corresponding gradient data of the first model basis received from each service party are combined to obtain global gradient data corresponding to the first model basis. The global gradient data is sent to each service party. Reference parameters in the local model basis corresponding to the first model basis are updated by each service party using the global gradient data to train the respective local service model.
Type: Application
Filed: October 27, 2021
Publication date: April 28, 2022
Applicant: ALIPAY (HANGZHOU) INFORMATION TECHNOLOGY CO., LTD.
Inventors: Yilun Lin, Hongjun Yin, Jinming Cui, Chaochao Chen, Li Wang, Jun Zhou
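The server-side combination step could be sketched as below. Combining by a plain average, and a gradient-descent update on the party side, are assumptions; the abstract only says the per-party gradients are "combined":

```python
# Hypothetical sketch of the federated aggregation round for one model basis.

def aggregate_gradients(party_grads):
    """party_grads: list of equal-length gradient vectors, one per party.
    Returns their element-wise average as the global gradient."""
    n = len(party_grads)
    return [sum(g[k] for g in party_grads) / n
            for k in range(len(party_grads[0]))]

def apply_update(params, global_grad, lr=0.1):
    """Each party applies the shared global gradient to its local copy."""
    return [p - lr * g for p, g in zip(params, global_grad)]
```

Each party thus trains on its own samples but moves its shared reference parameters with the same global gradient, keeping the bases in sync.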
-
Publication number: 20220129701
Abstract: The invention relates to a system for detecting objects in a digital image. The system comprises a neural network which is configured to generate candidate windows indicating object locations, and to generate for each candidate window a score representing the confidence of detection. Generating the scores comprises: generating a latent representation for each candidate window, updating the latent representation of each candidate window based on the latent representation of neighboring candidate windows, and generating the score for each candidate window based on its updated latent representation. The invention further relates to a system for rescoring object detections in a digital image and to methods of detecting objects and rescoring objects.
Type: Application
Filed: December 27, 2021
Publication date: April 28, 2022
Applicants: TOYOTA MOTOR EUROPE, MAX-PLANCK-GESELLSCHAFT ZUR FORDERUNG DER WISSENSCHAFTEN E.V.
Inventors: Daniel OLMEDA REINO, Bernt Schiele, Jan Hendrik Hosang, Rodrigo Benenson
-
Publication number: 20220129702
Abstract: An image searching apparatus includes: a processor; and a memory, wherein the processor is configured to attach, to an image with a first correct label attached thereto, a second correct label, the first correct label being a correct label attached to each image included in an image dataset for training for use in supervised training, the second correct label being a correct label based on a degree of similarity from a predetermined standpoint; execute main training processing to train a classifier by using the images and one of the first correct label and the second correct label; fine-tune a training state of the classifier, trained by the main training processing, by using the images and the other one of the first correct label and the second correct label; and search, by using the classifier that is fine-tuned, for images similar to a query image.
Type: Application
Filed: January 6, 2022
Publication date: April 28, 2022
Applicant: CASIO COMPUTER CO., LTD.
Inventor: Yoshihiro TESHIMA
-
Publication number: 20220129703
Abstract: An embodiment of the present disclosure provides an artificial intelligence apparatus for generating training data, including a memory configured to store an artificial intelligence model, an input interface including a microphone or a camera, and a processor configured to receive, via the input interface, input data, generate an inference result corresponding to the input data by using the artificial intelligence model, receive feedback corresponding to the inference result, determine suitability of the input data and the feedback for updating the artificial intelligence model, and generate training data based on the input data and the feedback if the input data and the feedback are determined to be suitable for updating the artificial intelligence model.
Type: Application
Filed: January 11, 2022
Publication date: April 28, 2022
Applicant: LG ELECTRONICS INC.
Inventor: Jongwoo HAN
-
Publication number: 20220129704
Abstract: A computing device includes: an inference circuit that calculates a recognition result of a recognition target and reliability of the recognition result using sensor data from a sensor group that detects the recognition target and a first classifier that classifies the recognition target; and a classification circuit that classifies the sensor data into either an associated target with which the recognition result is associated or a non-associated target with which the recognition result is not associated, based on the reliability of the recognition result calculated by the inference circuit.
Type: Application
Filed: October 21, 2019
Publication date: April 28, 2022
Applicant: Hitachi Astemo, Ltd.
Inventor: Daichi MURATA
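The classification circuit's rule could be sketched as a simple reliability threshold: sensor data whose recognition reliability clears the threshold keeps its recognition result attached; the rest is set aside. The threshold value and the tuple layout are illustrative assumptions:

```python
# Hypothetical sketch of splitting sensor data by recognition reliability.

def split_by_reliability(results, threshold=0.8):
    """results: list of (sensor_id, label, reliability) tuples.
    Returns (associated, non_associated): associated entries keep their
    recognition result; non-associated entries keep only the sensor id."""
    associated, non_associated = [], []
    for sensor_id, label, reliability in results:
        if reliability >= threshold:
            associated.append((sensor_id, label))
        else:
            non_associated.append(sensor_id)
    return associated, non_associated
```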
-
Publication number: 20220129705
Abstract: An apparatus includes a modified image generator generating modified images by modifying each unlabeled image, a pre-trainer to generate a feature vector for each modified image by using an artificial neural network-based encoder and train the encoder based on the feature vector for each modified image, a pseudo-label generator to generate a feature vector for each unlabeled training image, cluster the training images based on the feature vector for each training image, and generate a pseudo-label for at least one training image among the training images based on the clustering result, and a further trainer to generate a predicted label by using the trained encoder and a classification model including a classifier to generate a predicted label for an image input to the trained encoder based on a feature vector, and train the classification model based on the pseudo-label and predicted label for the at least one training image.
Type: Application
Filed: January 14, 2021
Publication date: April 28, 2022
Inventors: Byoung Jip KIM, Jin Ho CHOO, Yeong Dae KWON, Il Joo YOON, Du Won PARK