Patents by Inventor Quan KONG
Quan KONG has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 12211256
Abstract: The invention supports the creation of models for recognizing attributes in an image with high accuracy. An image recognition support apparatus includes an image input unit configured to acquire an image; a pseudo label generation unit configured to recognize the acquired image based on a plurality of types of image recognition models, output recognition information, and generate pseudo labels indicating attributes of the acquired image based on the output recognition information; and a new label generation unit configured to generate new labels based on the generated pseudo labels.
Type: Grant
Filed: February 23, 2022
Date of Patent: January 28, 2025
Assignee: Hitachi, Ltd.
Inventors: Soichiro Okazaki, Quan Kong, Tomoaki Yoshinaga
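The abstract describes generating pseudo labels from the outputs of several image recognition models. A minimal sketch of that idea, assuming a simple majority-vote agreement rule (the function name, the vote threshold, and the label sets are illustrative, not the patented method):

```python
# Hypothetical sketch: several image recognition models each predict
# attribute labels for one image; a pseudo label is kept only where
# enough models agree (majority vote).
from collections import Counter

def pseudo_labels(model_outputs, min_votes=2):
    """model_outputs: list of per-model attribute-label sets for one image."""
    votes = Counter(label for labels in model_outputs for label in set(labels))
    return {label for label, n in votes.items() if n >= min_votes}

# Three models disagree on "bag" and "hat" but agree on "person".
outputs = [{"person", "bag"}, {"person"}, {"person", "hat"}]
labels = pseudo_labels(outputs)  # only "person" survives the vote
```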
-
Publication number: 20250014219
Abstract: According to the method of the present disclosure, a two-dimensional joint pose of a target object belonging to an articulated object is estimated from an image of the target object by using a model. A query pose is generated by removing a joint whose confidence score is lower than a threshold from the two-dimensional joint pose of the target object. A sample two-dimensional joint pose closest to the query pose is obtained from a database in which a plurality of sample two-dimensional joint poses is registered for each basic joint pose of a sample articulated object. The two-dimensional joint pose of the target object is corrected by replacing the joint whose confidence score is lower than the threshold with a corresponding joint of the sample two-dimensional joint pose closest to the query pose.
Type: Application
Filed: July 4, 2024
Publication date: January 9, 2025
Inventors: Norimasa KOBORI, Jira JINDALERTUDOMDEE, Quan KONG
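The correction step above can be sketched as follows: drop low-confidence joints to form the query, find the closest database pose over the remaining joints, and fill the dropped joints from that pose. All names are hypothetical; poses are modeled here as lists of (x, y, confidence) triples, and the distance metric is a plain sum of squared differences, which is an assumption.

```python
# Illustrative sketch of the pose-correction idea, not the patented method.
import math

def correct_pose(pose, database, threshold=0.5):
    # Indices of joints confident enough to keep in the query pose.
    kept = [i for i, (_, _, c) in enumerate(pose) if c >= threshold]

    def dist(sample):  # distance over the confident joints only
        return math.fsum(
            (pose[i][0] - sample[i][0]) ** 2 + (pose[i][1] - sample[i][1]) ** 2
            for i in kept
        )

    best = min(database, key=dist)  # closest registered sample pose
    # Replace each low-confidence joint with the matched sample's joint.
    return [
        pose[i] if i in kept else (best[i][0], best[i][1], pose[i][2])
        for i in range(len(pose))
    ]
```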
-
Publication number: 20240394435
Abstract: A security system in a predetermined area is based on cooperation between security guards and moving bodies. A time-dependent position of each moving body is predetermined. A security simulation system performs a simulation of the security system to search for an appropriate time-dependent position of each security guard. The security simulation system calculates a rushing time from when an abnormality is detected by a first moving body to when a first security guard arrives at the position of the first moving body, and calculates a security coverage area that can be monitored by the security guards and the moving bodies. An evaluation function increases as the rushing time becomes shorter and as the security coverage area becomes wider. The security simulation system determines the time-dependent position of each security guard such that the value of the evaluation function becomes a predetermined level or higher.
Type: Application
Filed: May 15, 2024
Publication date: November 28, 2024
Inventors: Norimasa KOBORI, Yumi SATO, Bing XUE, Takashi HOMMA, Sho OTAKI, Quan KONG, Yohei OZAO
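A toy version of the evaluation function described above: it increases as the rushing time gets shorter and as the coverage area gets wider. The functional form and the weights are assumptions for illustration only; the patent does not specify them here.

```python
# Hypothetical evaluation function for a guard placement:
# shorter rushing time and wider coverage both raise the score.
def evaluate(rushing_time, coverage_area, w_time=1.0, w_area=1.0):
    return w_time / (1.0 + rushing_time) + w_area * coverage_area

# A placement with a shorter rushing time and wider coverage scores higher.
better = evaluate(rushing_time=2.0, coverage_area=0.8)
worse = evaluate(rushing_time=10.0, coverage_area=0.5)
```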
-
Publication number: 20240395046
Abstract: A management system communicates with a moving body having a localization function. The management system acquires an image captured by a moving camera mounted on the moving body and information on the moving camera position, which is the position of the moving body when the image is captured. The management system extracts an image of a target area captured by the moving camera as a target area image based on the moving camera position. The management system executes an area management process for managing the target area based on the target area image.
Type: Application
Filed: April 19, 2024
Publication date: November 28, 2024
Inventors: Norimasa KOBORI, Yumi SATO, Bing XUE, Hitoshi KAMADA, Hsuan-Kung YANG, Takashi HOMMA, Quan KONG
-
Publication number: 20240386721
Abstract: A model generation method is provided for generating a video extraction model that extracts a matching interval in a video that matches the contents of an input sentence. In the model generation method, a base matching interval and a sub matching interval in a training video are extracted by inputting a base sentence and a sub sentence to the video extraction model. Next, a loss for each of the ground truth interval, the base matching interval, and the sub matching interval is calculated by processing a learning task of reconstructing the base sentence based on the feature value of the training video corresponding to each of them. Then, machine learning is performed such that the first loss, related to the ground truth interval, is smaller than the second loss, related to the base matching interval, and the second loss is smaller than the third loss, related to the sub matching interval.
Type: Application
Filed: April 15, 2024
Publication date: November 21, 2024
Inventors: Quan KONG, Hsuan-Kung YANG, Norimasa KOBORI, Lijin YANG
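The ordering constraint on the three reconstruction losses (ground truth < base match < sub match) can be expressed with a hinge (margin ranking) penalty, a common way to train such an ordering. Whether the patent uses exactly this form is an assumption; the sketch below only illustrates the constraint.

```python
# Hypothetical ordering penalty: zero when the three losses already
# satisfy loss_gt < loss_base < loss_sub by at least the margin,
# positive otherwise.
def ordering_penalty(loss_gt, loss_base, loss_sub, margin=0.1):
    hinge = lambda a, b: max(0.0, a - b + margin)  # penalize a >= b - margin
    return hinge(loss_gt, loss_base) + hinge(loss_base, loss_sub)
```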
-
Publication number: 20240386605
Abstract: A first world is one of a real world and a virtual world simulating the real world, and a second world is the other of them. A first image is captured by a first camera in the first world, and a second image is captured by a second camera in the second world. A visual positioning system executes common processing that generates a scene graph representing the positional relationship between objects included in an image and extracts a feature amount of the scene graph. The visual positioning system performs matching between a first feature amount extracted by the common processing on the first image and a second feature amount extracted by the common processing on the second image, and then associates the first camera position in the first world and the second camera position in the second world with each other based on a result of the matching.
Type: Application
Filed: March 19, 2024
Publication date: November 21, 2024
Inventors: Norimasa KOBORI, Quan KONG, Hsuan-Kung YANG
-
Publication number: 20240386608
Abstract: A depth estimation apparatus executes a first process of calculating a calibration value for an estimated depth and a second process of calibrating the estimated depth based on the calibration value. The first process includes: specifying a plane area of the image in which a horizontal plane or a vertical plane is reflected; setting a plurality of partial regions in the image; calculating a regression plane representing the horizontal plane or the vertical plane for each partial region; and calculating the calibration value for each partial region by comparing an installation position of the camera with the position of the camera with respect to the regression plane. The second process includes calibrating the estimated depth for each partial region based on the calibration value corresponding to that partial region.
Type: Application
Filed: March 21, 2024
Publication date: November 21, 2024
Inventors: Quan KONG, Mustafa ERDOGAN, Norimasa KOBORI
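A minimal sketch of the calibration idea for a horizontal ground plane: if the camera's true installation height is known, the ratio between it and the height recovered from the regression plane gives a per-region scale that is then applied to the estimated depths in that region. The regression-plane fit itself is omitted, and all names are illustrative assumptions.

```python
# Hypothetical per-region depth calibration by height ratio.
def calibration_value(installed_height, estimated_height):
    # Ratio of the known camera height to the height implied by the
    # regression plane for this partial region.
    return installed_height / estimated_height

def calibrate_depths(depths, scale):
    # Rescale every estimated depth in the region by the calibration value.
    return [d * scale for d in depths]

scale = calibration_value(installed_height=1.5, estimated_height=1.2)
depths = calibrate_depths([4.0, 8.0], scale)  # roughly [5.0, 10.0]
```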
-
Publication number: 20240386581
Abstract: A tracking system for moving bodies includes a graph. In the graph, a node representing a single camera and a node representing a common tracking ID assigned to a moving body reflected in image data acquired by that camera are connected via an edge. Further, if there is a relationship between at least two single cameras, the nodes representing those cameras are connected via at least one edge representing that relationship. Furthermore, if the moving bodies reflected in the video data captured by at least two single cameras are recognized to be the same moving object, the nodes representing their common tracking IDs are connected via at least one edge representing that they are the same moving object.
Type: Application
Filed: March 21, 2024
Publication date: November 21, 2024
Inventors: Hitoshi KAMADA, Hsuan-Kung YANG, Norimasa KOBORI, Naphatthara PHLOYNGAM, Mustafa ERDOGAN, Rajat SAINI, Quan KONG
-
Publication number: 20240386697
Abstract: A re-identification system tentatively performs a re-identification process for determining whether two moving objects shown in a plurality of videos are identical or not. Similarities between the two moving objects in the re-identification process are ranked in consideration of the direction of each moving object. The rank is highest when the two moving objects are identical and face the same direction. The rank is lowest when the two moving objects are not identical and face different directions. The ranking rule is that the rank is higher as the similarity is higher. The re-identification system calculates a degree of consistency between the ranking result and the ranking rule. Then, the re-identification system finally determines whether the two moving objects are identical or not based on the degree of consistency in addition to the similarities.
Type: Application
Filed: March 19, 2024
Publication date: November 21, 2024
Inventors: Quan KONG, Hsuan-Kung YANG, Norimasa KOBORI, Jira JINDALERTUDOMDEE
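One way to measure how well a ranking result follows the rule "higher similarity, higher rank" is the fraction of concordant pairs, a Kendall-tau-like score. This concordance measure is an illustrative stand-in; the patent's actual definition of the degree of consistency is not given in the abstract.

```python
# Hypothetical degree-of-consistency score between observed ranks and
# the "higher similarity => higher rank" rule. Larger rank number means
# higher rank here (an assumption).
from itertools import combinations

def consistency(similarities, ranks):
    pairs = list(combinations(range(len(similarities)), 2))
    concordant = sum(
        1 for i, j in pairs
        if (similarities[i] - similarities[j]) * (ranks[i] - ranks[j]) > 0
    )
    return concordant / len(pairs)
```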
-
Publication number: 20240386056
Abstract: Processing to generate a graph and processing to search for a tracking target by referring to the graph are performed. In the processing to search for the tracking target, the feature quantity of the tracking target is extracted from an image of the tracking target. Also, the moving body whose feature quantity is most similar to the tracking target's feature quantity is specified from among the moving-body feature quantities extracted from at least two moving-body images represented by at least two nodes constituting the graph. Then, a tracking target graph is specified that includes a node representing the tracking identification number assigned to the identified moving body and at least one node connected to that node via at least one edge.
Type: Application
Filed: May 1, 2024
Publication date: November 21, 2024
Inventors: Naoya YOSHIMURA, Hitoshi KAMADA, Hsuan-Kung YANG, Norimasa KOBORI, Quan KONG
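The search step can be sketched as nearest-neighbor matching over stored node features. Cosine similarity, the dictionary layout, and all names are simplifying assumptions; the patent does not specify the similarity measure in the abstract.

```python
# Hypothetical sketch: find the tracking ID whose stored moving-body
# feature is most similar (by cosine similarity) to the target's feature.
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

def find_tracking_id(target_feature, node_features):
    """node_features: {tracking_id: feature_vector} for graph nodes."""
    return max(node_features, key=lambda tid: cosine(target_feature, node_features[tid]))
```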
-
Publication number: 20230306489
Abstract: A data analysis apparatus is provided with a graph data generation unit that generates, in chronological order, a plurality of items of graph data configured by combining a plurality of nodes representing attributes for each element and a plurality of edges representing relatedness between the plurality of nodes, a node feature vector extraction unit that extracts a node feature vector for each of the plurality of nodes, an edge feature vector extraction unit that extracts an edge feature vector for each of the plurality of edges, and a spatiotemporal feature vector calculation unit that calculates a spatiotemporal feature vector indicating a change in node feature vector by performing, on the plurality of items of graph data generated by the graph data generation unit, convolution processing for each of a space direction and a time direction on the basis of the node feature vector and the edge feature vector.
Type: Application
Filed: August 18, 2021
Publication date: September 28, 2023
Inventors: Quan KONG, Tomoaki YOSHINAGA
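A toy reading of "convolution for each of a space direction and a time direction": a spatial step mixes each node's feature with its graph neighbors via an adjacency matrix, and a temporal step averages each node's feature across consecutive graph snapshots. Plain scalars stand in for real feature vectors; the patented operators are not specified here.

```python
# Illustrative sketch only, assuming scalar per-node features.
def spatial_step(adjacency, features):
    # Mix each node's feature with its neighbors' features (A @ x).
    n = len(features)
    return [
        sum(adjacency[i][j] * features[j] for j in range(n))
        for i in range(n)
    ]

def temporal_step(snapshots):
    # Average each node's feature across chronological graph snapshots.
    n = len(snapshots[0])
    return [sum(s[i] for s in snapshots) / len(snapshots) for i in range(n)]
```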
-
Patent number: 11587301
Abstract: Provided are: an amodal segmentation unit that generates a set of first amodal masks indicating a probability that a particular pixel belongs to a relevant object for each of objects, with respect to an input image in which a plurality of the objects partially overlap; an overlap segmentation unit that generates an overlap mask corresponding only to an overlap region where the plurality of objects overlap in the input image based on an aggregate mask obtained by combining the set of first amodal masks generated for each of the objects and a feature map generated based on the input image; and an amodal mask correction unit that generates and outputs a second amodal mask, which includes an annotation label indicating a category of each of the objects corresponding to a relevant pixel, for each of pixels in the input image using the overlap mask and the aggregate mask.
Type: Grant
Filed: October 15, 2020
Date of Patent: February 21, 2023
Assignee: HITACHI SOLUTIONS, LTD.
Inventors: Ziwei Deng, Quan Kong, Naoto Akira, Tomokazu Murakami
-
Publication number: 20220398831
Abstract: The invention supports the creation of models for recognizing attributes in an image with high accuracy. An image recognition support apparatus includes an image input unit configured to acquire an image; a pseudo label generation unit configured to recognize the acquired image based on a plurality of types of image recognition models, output recognition information, and generate pseudo labels indicating attributes of the acquired image based on the output recognition information; and a new label generation unit configured to generate new labels based on the generated pseudo labels.
Type: Application
Filed: February 23, 2022
Publication date: December 15, 2022
Applicant: HITACHI, LTD.
Inventors: Soichiro OKAZAKI, Quan KONG, Tomoaki YOSHINAGA
-
Patent number: 11482001
Abstract: In a current image processing environment, providing feedback for a user operation is slow, and the user cannot confirm the result of the operation in real time. Thus, the user interrupts the operation each time to confirm the feedback, and image processing takes a long time. An image processing device includes a guidance information generator. The guidance information generator acquires trajectory information, input by a user, to be used to segment an image object on an image, generates guidance information indicating the segmentation region desired by the user based on the trajectory information, and presents the segmentation region based on the user's intention in real time. The guidance information generator provides a function of smoothly and continuously segmenting an image object.
Type: Grant
Filed: February 24, 2020
Date of Patent: October 25, 2022
Assignee: HITACHI, LTD.
Inventors: Ziwei Deng, Quan Kong, Naoto Akira, Tomokazu Murakami
-
Publication number: 20220227081
Abstract: Flow barriers such as trenches (144) and/or walls (152) laterally surrounding an aperture (142) in a coating (140) on a transparent substrate (120) help control the flow of replication material (124) during the formation of a replicated optical element on the aperture (142).
Type: Application
Filed: May 19, 2020
Publication date: July 21, 2022
Inventors: Tae Yong Ahn, Sai Mun Chan, Lorenzo Tonsa, Lili Chong, Woei Quan Kong, Chitra Nadimuthu, Kay Khine Aung, Herng Wei Pook, Uros Markovic
-
Publication number: 20210357629
Abstract: A video processing apparatus that processes a video of a moving body captured by a camera is configured to sample frames output from the camera at a predetermined rate, calculate a direction of motion of the moving body based on a sequence of a plurality of the frames, and extract a feature amount of the video by performing convolution processing together on the plurality of the frames based on the calculated direction.
Type: Application
Filed: May 12, 2021
Publication date: November 18, 2021
Applicant: HITACHI, LTD.
Inventors: Quan KONG, Tomoaki YOSHINAGA, Tomokazu MURAKAMI
-
Publication number: 20210248408
Abstract: Provided are: an amodal segmentation unit that generates a set of first amodal masks indicating a probability that a particular pixel belongs to a relevant object for each of objects, with respect to an input image in which a plurality of the objects partially overlap; an overlap segmentation unit that generates an overlap mask corresponding only to an overlap region where the plurality of objects overlap in the input image based on an aggregate mask obtained by combining the set of first amodal masks generated for each of the objects and a feature map generated based on the input image; and an amodal mask correction unit that generates and outputs a second amodal mask, which includes an annotation label indicating a category of each of the objects corresponding to a relevant pixel, for each of pixels in the input image using the overlap mask and the aggregate mask.
Type: Application
Filed: October 15, 2020
Publication date: August 12, 2021
Applicant: HITACHI SOLUTIONS, LTD.
Inventors: Ziwei DENG, Quan KONG, Naoto AKIRA, Tomokazu MURAKAMI
-
Patent number: 10977515
Abstract: An image retrieving apparatus includes a pose estimating unit which recognizes pose information of a retrieval target including a plurality of feature points from an input image, a feature extracting unit which extracts features from the pose information and the input image, an image database which accumulates the features in association with the input image, a query generating unit which generates a retrieval query from pose information specified by a user, and an image retrieving unit which retrieves images including similar poses according to the retrieval query from the image database.
Type: Grant
Filed: October 26, 2018
Date of Patent: April 13, 2021
Assignee: HITACHI, LTD.
Inventors: Yuki Watanabe, Kenichi Morita, Tomokazu Murakami, Atsushi Hiroike, Quan Kong
-
Patent number: 10963736
Abstract: Provided is a technique of object recognition that can accurately recognize an object. An object recognition apparatus (i) generates property data that highlights a specific property based on target data, (ii) extracts a discrimination-use feature amount used for discrimination of each piece of the property data, (iii) calculates discrimination information used for discrimination of the property data, (iv) extracts a reliability feature amount used for estimation of reliability of the discrimination information calculated for each piece of the property data, (v) estimates the reliability of the discrimination information, (vi) generates synthesized information acquired by synthesizing the discrimination information calculated for each piece of the property data and the reliability calculated for each piece of the property data, and (vii) performs processing related to the object recognition.
Type: Grant
Filed: December 21, 2017
Date of Patent: March 30, 2021
Assignee: HITACHI, LTD.
Inventors: Quan Kong, Yuki Watanabe, Naoto Akira, Daisuke Matsubara
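Step (vi) above, synthesizing per-property discrimination information with its estimated reliability, can be read as a reliability-weighted combination. A weighted average is one natural reading; the exact synthesis rule is an assumption, as are all names below.

```python
# Hypothetical reliability-weighted synthesis of per-property
# discrimination scores: properties whose discrimination information is
# judged more reliable contribute more to the combined result.
def synthesize(discrimination_scores, reliabilities):
    total = sum(reliabilities)
    return sum(s * r for s, r in zip(discrimination_scores, reliabilities)) / total
```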
-
Publication number: 20200311575
Abstract: The online recognition apparatus includes a feature amount extraction unit that extracts a feature amount of input data, an identification result prediction unit that predicts an identification result based on the extracted feature amount, a prediction result evaluation unit that determines the necessity of labeling from the predicted identification result, a correct answer assigning unit that assigns a correct answer to input data online based on the determination result, a generator update unit that updates a parameter of a generator based on the input data with the correct answer, a pseudo-learning data generation unit that establishes a generator based on the updated parameter and generates pseudo-learning data, and an identifier update unit that updates, online, a parameter of an identifier prepared in advance based on the input data with the correct answer and the pseudo-learning data. The updated identifier is used as a new identification result prediction unit.
Type: Application
Filed: August 9, 2018
Publication date: October 1, 2020
Inventors: Quan KONG, Yuki WATANABE, Naoto AKIRA, Tomokazu MURAKAMI
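The prediction-result evaluation step can be sketched as a confidence gate: low-confidence predictions are flagged as needing a correct answer (label) from the user, while confident predictions pass through. The threshold rule and all names are illustrative assumptions, not the patented criterion.

```python
# Hypothetical labeling-necessity check for online recognition.
def needs_labeling(confidence, threshold=0.8):
    # Flag predictions the identifier is not confident about.
    return confidence < threshold

# Only the uncertain predictions are queued for online correct-answer assignment.
queue = [c for c in (0.95, 0.40, 0.85, 0.60) if needs_labeling(c)]
```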