Patents by Inventor Joonhwa Shin
Joonhwa Shin has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20250200975
Abstract: Apparatuses, systems, and techniques for multi-object tracking of partially occluded objects in a monitored environment are provided. A reference point of a first object in an environment is identified based on characteristics pertaining to the first object. A portion of the first object is occluded by a second object in the environment relative to a perspective of a camera component associated with a set of image frames depicting the first object and the second object. A set of coordinates of a multi-dimensional model for the first object is updated based on the identified reference point. The updated set of coordinates indicate a region of at least one of the set of image frames that include the occluded portion of the first object relative to the identified reference point. A location of the first object is tracked in the environment based on the updated set of coordinates of the multi-dimensional model.
Type: Application
Filed: November 4, 2024
Publication date: June 19, 2025
Inventors: Joonhwa Shin, Fangyu Li, Hugo Maxence Verjus, Zheng Liu
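The idea above can be sketched in a few lines. This is a hedged illustration, not the patented implementation: it assumes a fixed-size 2D bounding-box "model" and a visible reference point (e.g., the unoccluded top-left of the object); all names and conventions are illustrative.

```python
# Illustrative sketch (not the patented method): place a fixed-size box
# model at a visible reference point so the box also covers the occluded
# region, then report the box center as the tracked location.

def update_model_coords(ref_point, box_size):
    """Anchor the model box's top-left corner at the visible reference point."""
    x, y = ref_point
    w, h = box_size
    return (x, y, x + w, y + h)  # (x1, y1, x2, y2) in image coordinates

def track_location(coords):
    """Report the box center as the tracked object location."""
    x1, y1, x2, y2 = coords
    return ((x1 + x2) / 2.0, (y1 + y2) / 2.0)

coords = update_model_coords(ref_point=(100, 50), box_size=(40, 80))
center = track_location(coords)
```

Because the box extent is carried by the model rather than re-measured per frame, the occluded part of the object still has coordinates even when only the reference point is visible.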
-
Publication number: 20250200354
Abstract: Approaches are disclosed that can automatically tune parameters in an application pipeline. An application pipeline and datasets with labels can be accepted as input. An application pipeline can include various modules and information such as their interconnections and a set of parameters to be tuned. The input data can be fed into a preprocessing module and then fed into a parameter search module, which can navigate through the parameter space and search for improved and/or optimal parameters. The search can progress to informed selections based on outcomes of previous evaluations. The parameters identified can be used by an execution module to execute the pipeline. The results produced can be evaluated by an evaluation module that condenses its findings into a single score, which is passed back to a parameter search module to inform the next round of parameter predictions.
Type: Application
Filed: December 14, 2023
Publication date: June 19, 2025
Inventors: Joonhwa Shin, Naimul Hasan, Fangyu Li, Hugo Verjus, Zheng Liu
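The search-evaluate-feedback loop can be sketched as follows. This is a simplified stand-in: it uses exhaustive grid search over discrete candidates instead of the informed search the abstract describes, and the pipeline, parameter names, and scoring function are all hypothetical.

```python
# Simplified tuning loop (grid search stands in for the informed search
# described in the abstract). The pipeline callable plays the role of the
# execution + evaluation modules, condensing each run into one score.
import itertools

def tune(pipeline, param_space, dataset):
    best_score, best_params = float("-inf"), None
    keys = list(param_space)
    for values in itertools.product(*(param_space[k] for k in keys)):
        params = dict(zip(keys, values))
        score = pipeline(params, dataset)  # execute pipeline, get one score
        if score > best_score:             # score feeds back into the search
            best_score, best_params = score, params
    return best_params, best_score

# Toy pipeline whose score peaks at threshold=0.5, window=3.
toy = lambda p, _: -abs(p["threshold"] - 0.5) - abs(p["window"] - 3)
best, score = tune(toy, {"threshold": [0.3, 0.5, 0.7], "window": [1, 3, 5]}, None)
```

An informed search (e.g., Bayesian optimization) would replace the exhaustive loop with candidate selection conditioned on previous scores, but the interface — parameters in, one score back — stays the same.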
-
Publication number: 20250046080
Abstract: Apparatuses, systems, and techniques for real-time persistent object tracking for intelligent video analytics systems are provided. A first object is tracked in an environment depicted by a first set of images. One or more predicted future states of the first object in the environment are obtained. A second object is detected in the environment depicted by a second set of images. A number of images of the second set of images exceeds a threshold number of images. A determination is made of whether a current state of the second object corresponds to at least one of the predicted future states of the first object. Responsive to a determination that a current state of the second object corresponds to at least one of the predicted future states of the first object, state data for the first object is updated based on the determined current state of the second object.
Type: Application
Filed: October 21, 2024
Publication date: February 6, 2025
Inventors: Joonhwa Shin, Fangyu Li, Zheng Liu, Kaustubh Purandare
-
Patent number: 12125277
Abstract: Apparatuses, systems, and techniques for real-time persistent object tracking for intelligent video analytics systems. A state of a first object included in an environment may be tracked based on a first set of images depicting the environment. The first set of images may be generated during a first time period. It may be determined that the first object is not detected in the environment depicted in a second set of images. The second set of images may be generated during a second time period that is subsequent to the first time period. One or more predicted future states of the first object may be obtained in view of the state of the first object in the environment depicted in the first set of images. A second object may be detected in the environment depicted in a third set of images generated during a third time period that is subsequent to the second time period.
Type: Grant
Filed: October 15, 2021
Date of Patent: October 22, 2024
Assignee: NVIDIA Corporation
Inventors: Joonhwa Shin, Fangyu Li, Zheng Liu, Kaustubh Purandare
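The predict-then-reassociate idea behind this and the related application above can be sketched minimally. This assumes a constant-velocity motion model and a simple distance tolerance, neither of which is specified by the patent; both are illustrative choices.

```python
# Minimal sketch (assumed constant-velocity motion, not the patented
# method): predict future states of a lost object, then check whether a
# newly detected object matches one of the predicted states.

def predict_states(pos, vel, steps):
    """Constant-velocity prediction of future (x, y) positions."""
    return [(pos[0] + vel[0] * t, pos[1] + vel[1] * t) for t in range(1, steps + 1)]

def matches(detection, predictions, tol=5.0):
    """True if the detection lies within `tol` of any predicted state."""
    return any(abs(detection[0] - px) <= tol and abs(detection[1] - py) <= tol
               for px, py in predictions)

preds = predict_states(pos=(10.0, 20.0), vel=(2.0, 0.0), steps=5)
near = matches((18.0, 21.0), preds)   # close to a prediction: keep old track ID
far = matches((80.0, 21.0), preds)    # far from all predictions: new object
```

When a match is found, the lost track's state data is updated from the new detection instead of spawning a fresh track, which is what makes the tracking "persistent" across detection gaps.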
-
Publication number: 20240303836
Abstract: In various examples, image areas may be extracted from a batch of one or more images and may be scaled, in batch, to one or more template sizes. Where the image areas include search regions used for localization of objects, the scaled search regions may be loaded into Graphics Processing Unit (GPU) memory and processed in parallel for localization. Similarly, where image areas are used for filter updates, the scaled image areas may be loaded into GPU memory and processed in parallel for filter updates. The image areas may be batched from any number of images and/or from any number of single- and/or multi-object trackers. Further aspects of the disclosure provide approaches for associating locations using correlation response values, for learning correlation filters in object tracking based at least on focused windowing, and for learning correlation filters in object tracking based at least on occlusion maps.
Type: Application
Filed: May 21, 2024
Publication date: September 12, 2024
Inventors: Joonhwa Shin, Zheng Liu, Kaustubh Purandare
-
Publication number: 20240303988
Abstract: Apparatuses, systems, and techniques for dynamically composable object tracker configuration for intelligent video analytics systems. A state of one or more objects included in an environment is tracked using an object tracking application that implements an object tracker of a first type based on images depicting the environment. A request is received to perform tracking using a second object tracker type that is different from the first object tracker type. The object tracking application is configured to implement an object tracker of the second object tracker type in accordance with the request. The state of the objects in the environment is tracked using the object tracking application that implements the object tracker of the second object tracker type based on the images depicting the environment.
Type: Application
Filed: May 10, 2024
Publication date: September 12, 2024
Inventors: Joonhwa Shin, Fangyu Li, Zheng Liu, Kaustubh Purandare
-
Publication number: 20240202936
Abstract: A first visual appearance descriptor associated with a first object in an environment is obtained based on a first set of images of a first time period. The first object is subsequently absent from the environment in a second set of images of a second time period. A second visual appearance descriptor associated with a second object is obtained based on a third set of images, of a third time period subsequent to the second time period. A compound similarity metric between the first and second objects is obtained in view of visual appearance similarity and motion similarity metrics. The visual appearance similarity metric corresponds to a degree of similarity between the first and second visual appearance descriptors. An identifier associated with the second object is updated to correspond to an identifier associated with the first object in response to determining that the compound similarity metric meets a threshold value.
Type: Application
Filed: December 15, 2023
Publication date: June 20, 2024
Inventors: Joonhwa Shin, Fangyu Li, Hugo Maxence Verjus, Zheng Liu, Kaustubh Purandare
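A compound metric of this shape can be sketched as a weighted blend. The specific forms here — cosine similarity for appearance, a distance-based motion term, and the weight `alpha` — are assumptions for illustration, not the metric defined in the application.

```python
# Sketch of a compound similarity metric in the spirit of the abstract:
# cosine similarity of appearance descriptors blended with a simple
# distance-based motion similarity. Weights and forms are assumptions.
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def compound_similarity(desc1, desc2, pos1, pos2, alpha=0.7, scale=100.0):
    """alpha-weighted blend of appearance and motion similarity in [0, 1]."""
    appearance = cosine(desc1, desc2)
    motion = max(0.0, 1.0 - math.dist(pos1, pos2) / scale)
    return alpha * appearance + (1 - alpha) * motion

sim = compound_similarity([1.0, 0.0], [1.0, 0.0], (0, 0), (30, 40))
same_id = sim >= 0.6  # threshold met: reassign the old track's identifier
```

Blending the two terms lets a strong appearance match survive a moderate position gap (and vice versa), which is why a single threshold on the compound score can drive the identifier update.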
-
Patent number: 11995895
Abstract: In various examples, image areas may be extracted from a batch of one or more images and may be scaled, in batch, to one or more template sizes. Where the image areas include search regions used for localization of objects, the scaled search regions may be loaded into Graphics Processing Unit (GPU) memory and processed in parallel for localization. Similarly, where image areas are used for filter updates, the scaled image areas may be loaded into GPU memory and processed in parallel for filter updates. The image areas may be batched from any number of images and/or from any number of single- and/or multi-object trackers. Further aspects of the disclosure provide approaches for associating locations using correlation response values, for learning correlation filters in object tracking based at least on focused windowing, and for learning correlation filters in object tracking based at least on occlusion maps.
Type: Grant
Filed: May 29, 2020
Date of Patent: May 28, 2024
Assignee: NVIDIA Corporation
Inventors: Joonhwa Shin, Zheng Liu, Kaustubh Purandare
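The batching step — crop image areas from multiple frames and normalize them all to one template size — can be shown with a CPU stand-in. This sketch uses nearest-neighbor scaling on plain lists purely for illustration; the patent's point is that, once scaled to a common size, the whole batch can be processed in parallel on a GPU.

```python
# CPU stand-in for the batching idea: crop areas from frames and scale
# each to one common template size (nearest neighbor). On a GPU the
# uniformly sized crops would then be processed in parallel.

def crop(image, box):
    """Extract a (x1, y1, x2, y2) area from a 2D list image."""
    x1, y1, x2, y2 = box
    return [row[x1:x2] for row in image[y1:y2]]

def scale_nearest(area, size):
    """Nearest-neighbor resize of a 2D list to (height, width) = size."""
    th, tw = size
    sh, sw = len(area), len(area[0])
    return [[area[int(r * sh / th)][int(c * sw / tw)] for c in range(tw)]
            for r in range(th)]

frame = [[r * 10 + c for c in range(8)] for r in range(8)]
batch = [crop(frame, (0, 0, 4, 4)), crop(frame, (4, 4, 8, 8))]
template = [scale_nearest(a, (2, 2)) for a in batch]  # one template size
```

Uniform template sizes are what make the parallelism possible: every crop becomes the same tensor shape, so localization or filter updates run as one batched GPU operation instead of per-region calls.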
-
Patent number: 11983928
Abstract: Apparatuses, systems, and techniques for managing lost objects in an intelligent video analytics system. A first set of application modules is executed for an object tracking application configured to track, based on images depicting an environment, a state of objects included in the environment. The first set of application modules is associated with a first object tracker type. A request is received to configure the object tracking application to execute a second set of application modules associated with a second object tracker type. The second set of application modules includes one or more application modules that are different from application modules of the first set of application modules. The object tracking application is configured to execute the second set of application modules in accordance with the request.
Type: Grant
Filed: October 15, 2021
Date of Patent: May 14, 2024
Assignee: Nvidia Corporation
Inventors: Joonhwa Shin, Fangyu Li, Zheng Liu, Kaustubh Purandare
-
Patent number: 11689750
Abstract: Embodiments of the present disclosure relate to workload-based dynamic throttling of video processing functions. Systems and methods are disclosed that dynamically throttle video processing and/or streaming based on a workload. Live video is captured from one or more sources (e.g., cameras) and stored. The video is then provided to a video processing engine and a video streaming engine. The video processing engine may perform one or more operations such as object detection, object tracking, and object classification to produce characterization data (e.g., bounding boxes, object trajectories, alerts, object labels, object counts, boundary crossings, intersection highlighting, etc.). System resource usage and performance of the video processing and streaming are monitored to produce workload data (e.g., metrics). Based on the policies and the workload data, the video streaming and/or processing is dynamically reconfigured by adjusting parameters provided to the video streaming and processing engines.
Type: Grant
Filed: October 7, 2021
Date of Patent: June 27, 2023
Assignee: NVIDIA Corporation
Inventors: Bhanu Nagendra Pisupati, Rahul Maruti Bhagwat, Rohit Ramesh Vaswani, David Ung, Joonhwa Shin
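The monitor-policy-reconfigure loop can be sketched with one workload metric and one adjustable parameter. The metric (GPU utilization), the policy thresholds, and the frame-interval parameter here are all illustrative assumptions, not details from the patent.

```python
# Hedged sketch of a workload-based throttling policy: compare a workload
# metric (assumed: GPU utilization percent) against policy bounds and
# adjust a processing parameter (assumed: inference frame interval).

def throttle(frame_interval, gpu_util, high=90.0, low=50.0):
    """Raise the interval (process fewer frames) when overloaded,
    lower it (process more frames) when there is headroom."""
    if gpu_util > high:
        return min(frame_interval * 2, 16)  # back off, capped
    if gpu_util < low:
        return max(frame_interval // 2, 1)  # speed up, floored
    return frame_interval  # within policy bounds: leave config alone

interval = 4
interval = throttle(interval, gpu_util=95.0)  # overloaded: back off
interval = throttle(interval, gpu_util=40.0)  # headroom: speed back up
```

In a full system this decision would run periodically against the collected workload metrics, and the adjusted parameters would be pushed back into the processing and streaming engines.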
-
Publication number: 20230109778
Abstract: Embodiments of the present disclosure relate to workload-based dynamic throttling of video processing functions. Systems and methods are disclosed that dynamically throttle video processing and/or streaming based on a workload. Live video is captured from one or more sources (e.g., cameras) and stored. The video is then provided to a video processing engine and a video streaming engine. The video processing engine may perform one or more operations such as object detection, object tracking, and object classification to produce characterization data (e.g., bounding boxes, object trajectories, alerts, object labels, object counts, boundary crossings, intersection highlighting, etc.). System resource usage and performance of the video processing and streaming are monitored to produce workload data (e.g., metrics). Based on the policies and the workload data, the video streaming and/or processing is dynamically reconfigured by adjusting parameters provided to the video streaming and processing engines.
Type: Application
Filed: October 7, 2021
Publication date: April 13, 2023
Inventors: Bhanu Nagendra Pisupati, Rahul Maruti Bhagwat, Rohit Ramesh Vaswani, David Ung, Joonhwa Shin
-
Patent number: 11615430
Abstract: The method and system evaluate the effectiveness of a display location within a store based on a behavioral response analysis of shoppers in the vicinity. The effectiveness of a display location is measured by tracking shoppers in-store, extracting and processing shopper attributes, and extracting metrics based on the processed attributes. The metrics of one location are compared to the metrics of another location to determine overall effectiveness.
Type: Grant
Filed: February 5, 2014
Date of Patent: March 28, 2023
Assignee: VideoMining Corporation
Inventors: Rajeev Sharma, Joonhwa Shin, Namsoon Jung
-
Patent number: 11354683
Abstract: A method and system for creating an anonymous shopper panel based on multi-modal sensor data fusion. The anonymous shopper panel can serve the same role as a traditional shopper panel, whose members report their household information, such as household size, income level, demographics, etc., and their purchase history, yet without any voluntary participation. A configuration of vision sensors and mobile access points can be used to detect and track shoppers as they travel a retail environment. Fusion of those modalities can be used to form a trajectory. The trajectory data can then be associated with Point of Sale data to form a full set of shopper behavior data. Shopper behavior data for a particular visit can then be compared to data from previous shoppers' visits to determine if the shopper is a revisiting shopper. The shopper's data can then be aggregated for multiple visits to the retail location. The aggregated shopper data can be filtered using application-specific criteria to create an anonymous shopper panel.
Type: Grant
Filed: December 30, 2015
Date of Patent: June 7, 2022
Assignee: VideoMining Corporation
Inventors: Joonhwa Shin, Rajeev Sharma, Youngrock R Yoon, Donghun Kim
-
Publication number: 20200380274
Abstract: In various examples, image areas may be extracted from a batch of one or more images and may be scaled, in batch, to one or more template sizes. Where the image areas include search regions used for localization of objects, the scaled search regions may be loaded into Graphics Processing Unit (GPU) memory and processed in parallel for localization. Similarly, where image areas are used for filter updates, the scaled image areas may be loaded into GPU memory and processed in parallel for filter updates. The image areas may be batched from any number of images and/or from any number of single- and/or multi-object trackers. Further aspects of the disclosure provide approaches for associating locations using correlation response values, for learning correlation filters in object tracking based at least on focused windowing, and for learning correlation filters in object tracking based at least on occlusion maps.
Type: Application
Filed: May 29, 2020
Publication date: December 3, 2020
Inventors: Joonhwa Shin, Zheng Liu, Kaustubh Purandare
-
Patent number: 10614436
Abstract: A method and system for associating a mobile device carried by a shopper with Point-of-Sale (PoS) data from that shopper's transaction at a retail location. Specifically, the association can utilize face data that was associated with the mobile device to match face data that was generated by association with PoS data. The association with a mobile device can also include the calculation of a mobile trajectory to enable the generation of a shopper profile including shopper behavior. The mobile to PoS association can include tracking of the mobile device throughout a retail location to form a trajectory. The association of a face to a mobile device can utilize the device's MAC address and repeat visit analysis to determine a match. The association of a face to PoS data can utilize the aligning of event time series based on a dynamic time disparity to match checkout events with PoS events.
Type: Grant
Filed: August 25, 2016
Date of Patent: April 7, 2020
Assignee: VideoMining Corporation
Inventors: Rajeev Sharma, Joonhwa Shin, Youngrock Yoon, Donghun Kim
-
Patent number: 10262331
Abstract: A method and system for cross-channel shopper behavior analysis. Tracking individual shopper behavior across many retail locations (i.e., across multiple channels) can be extremely valuable for product manufacturers and retailers. A configuration of vision sensors and mobile access points can be used to detect and track shoppers as they travel a retail environment, forming a trajectory. The trajectory data can then be associated with Point of Sale data to form a full set of shopper behavior data. The shopper's data can then be aggregated for multiple visits to many retail locations in a geographic area. The aggregated shopper data can be filtered using application-specific criteria for further analysis.
Type: Grant
Filed: January 29, 2016
Date of Patent: April 16, 2019
Assignee: VideoMining Corporation
Inventors: Rajeev Sharma, Joonhwa Shin, Youngrock R Yoon, Donghun Kim
-
Patent number: 10217120
Abstract: The present invention provides a comprehensive method for automatically and unobtrusively analyzing the in-store behavior of people visiting a physical space using a multi-modal fusion based on multiple types of sensors. The types of sensors employed may include cameras for capturing a plurality of images and mobile signal sensors for capturing a plurality of Wi-Fi signals. The present invention integrates the plurality of input sensor measurements to reliably and persistently track the people's physical attributes and detect the people's interactions with retail elements. The physical and contextual attributes collected from the processed shopper tracks include the motion dynamics changes triggered by an implicit or explicit interaction with a retail element, comprising the behavior information for the people's trip.
Type: Grant
Filed: April 21, 2015
Date of Patent: February 26, 2019
Assignee: VideoMining Corporation
Inventors: Joonhwa Shin, Rajeev Sharma
-
Patent number: 10198625
Abstract: A method and system for associating a physically identifiable feature of a person with the unique identifier of a mobile device carried by the person as detected across repeat visits to a physical location or multiple locations. Specifically, associating the face of a repeat visitor with a unique identifier of a mobile device, such as the device's MAC address, by means of a repeat visit analysis without any explicit or voluntary participation of the person, for example, in a form of providing their information by participation in a survey. For each detection of a particular MAC address associated with a mobile device at a particular physical location, a set of candidate faces can be captured at that physical location. Revisiting faces can be identified and grouped together, and a probability that each group of faces is associated with a particular MAC address can then be calculated.
Type: Grant
Filed: March 26, 2016
Date of Patent: February 5, 2019
Assignee: VideoMining Corporation
Inventors: Joonhwa Shin, Rajeev Sharma, Donghun Kim
-
Patent number: 10083358
Abstract: A method and system for associating an image of a face of at least one person with Point-of-Sale (PoS) data by aligning a first event time series with a second event time series based on a dynamic time disparity. An exemplary embodiment can generate an event time series containing facial recognition data for a person or persons during the PoS transaction process. These data can form a vision-based checkout event time series. An embodiment can also collect PoS transaction data from the retail checkout system, using timestamp information to create a second event time series. As there may be a time disparity between the time series, they can then be aligned in order to match events from one time series to the other. Faces identified in checkout events in the first time series can be registered to PoS events and the results stored in a database.
Type: Grant
Filed: July 26, 2016
Date of Patent: September 25, 2018
Assignee: VideoMining Corporation
Inventors: Joonhwa Shin, Rajeev Sharma, Youngrock Yoon, Donghun Kim
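The alignment step can be sketched in simplified form. Note the simplification: this estimates a single constant clock offset between the two event streams, whereas the patent describes a dynamic time disparity; event times, candidate offsets, and the cost function are all illustrative.

```python
# Simplified sketch of event time-series alignment: estimate the clock
# offset between vision checkout events and PoS events (constant offset
# stands in for the dynamic disparity), then pair nearest events.

def best_offset(vision_times, pos_times, candidates):
    """Pick the candidate offset minimizing total nearest-event distance."""
    def cost(off):
        return sum(min(abs((v + off) - p) for p in pos_times)
                   for v in vision_times)
    return min(candidates, key=cost)

vision = [10.0, 32.0, 55.0]  # checkout events from facial recognition
pos = [13.0, 35.0, 58.0]     # PoS transaction timestamps
off = best_offset(vision, pos, candidates=[-5, -3, 0, 3, 5])
# Register each face-bearing checkout event to its nearest PoS event.
pairs = [(v, min(pos, key=lambda p: abs(v + off - p))) for v in vision]
```

With the offset applied, each vision-side checkout event maps to one PoS transaction, which is the association that gets written to the database.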
-
Patent number: 9317785
Abstract: The present invention is a system and method for performing ethnicity classification based on the facial images of people, using multi-category decomposition architecture of classifiers, which include a set of predefined auxiliary classifiers that are specialized to auxiliary features of the facial images. In the multi-category decomposition architecture, which is a hybrid multi-classifier architecture specialized to ethnicity classification, the task of learning the concept of ethnicity against significant within-class variations is handled by decomposing the set of facial images into auxiliary demographics classes; the ethnicity classification is performed by an array of classifiers where each classifier, called an auxiliary class machine, is specialized to the given auxiliary class. The facial image data is annotated to assign the age and gender labels as well as the ethnicity labels.
Type: Grant
Filed: April 21, 2014
Date of Patent: April 19, 2016
Assignee: Video Mining Corporation
Inventors: Hankyu Moon, Rajeev Sharma, Namsoon Jung, Joonhwa Shin