Patents by Inventor Te-Won Lee
Te-Won Lee has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20240419830
Abstract: Network infrastructure for user-specific generative intelligence. Providing user-specific context to a generically trained LLM introduces a variety of complications (privacy, resource utilization, training costs, etc.). Various aspects of the present disclosure provide novel user-specific data structures, privacy and access control, layers of data, and session management within a network infrastructure for generative intelligence. For example, user-specific embedding vectors may be used to provide user context to a generically trained foundation model. In some variants, edge devices capture multiple modalities of user context (images, audio; not just text). Privacy and access control mechanisms also allow a user to control information that is captured and sent to the foundation model. Session management further decouples a user's conversational state from the foundation model's session state. These concepts and others may be used to emulate e.g. …
Type: Application
Filed: June 17, 2024
Publication date: December 19, 2024
Applicant: SoftEye, Inc.
Inventors: Edwin Chongwoo Park, Te-Won Lee, DoYoung Lee, Aravind Natarajan
-
Publication number: 20240419727
Abstract: Network infrastructure for user-specific generative intelligence. Providing user-specific context to a generically trained LLM introduces a variety of complications (privacy, resource utilization, training costs, etc.). Various aspects of the present disclosure provide novel user-specific data structures, privacy and access control, layers of data, and session management within a network infrastructure for generative intelligence. For example, user-specific embedding vectors may be used to provide user context to a generically trained foundation model. In some variants, edge devices capture multiple modalities of user context (images, audio; not just text). Privacy and access control mechanisms also allow a user to control information that is captured and sent to the foundation model. Session management further decouples a user's conversational state from the foundation model's session state. These concepts and others may be used to emulate e.g. …
Type: Application
Filed: June 17, 2024
Publication date: December 19, 2024
Applicant: SoftEye, Inc.
Inventors: Edwin Chongwoo Park, Te-Won Lee, DoYoung Lee, Aravind Natarajan
-
Publication number: 20240419656
Abstract: Network infrastructure for user-specific generative intelligence. Providing user-specific context to a generically trained LLM introduces a variety of complications (privacy, resource utilization, training costs, etc.). Various aspects of the present disclosure provide novel user-specific data structures, privacy and access control, layers of data, and session management within a network infrastructure for generative intelligence. For example, user-specific embedding vectors may be used to provide user context to a generically trained foundation model. In some variants, edge devices capture multiple modalities of user context (images, audio; not just text). Privacy and access control mechanisms also allow a user to control information that is captured and sent to the foundation model. Session management further decouples a user's conversational state from the foundation model's session state. These concepts and others may be used to emulate e.g. …
Type: Application
Filed: June 17, 2024
Publication date: December 19, 2024
Applicant: SoftEye, Inc.
Inventors: Edwin Chongwoo Park, Te-Won Lee, DoYoung Lee, Aravind Natarajan
-
Publication number: 20240420491
Abstract: Network infrastructure for user-specific generative intelligence. Providing user-specific context to a generically trained LLM introduces a variety of complications (privacy, resource utilization, training costs, etc.). Various aspects of the present disclosure provide novel user-specific data structures, privacy and access control, layers of data, and session management within a network infrastructure for generative intelligence. For example, user-specific embedding vectors may be used to provide user context to a generically trained foundation model. In some variants, edge devices capture multiple modalities of user context (images, audio; not just text). Privacy and access control mechanisms also allow a user to control information that is captured and sent to the foundation model. Session management further decouples a user's conversational state from the foundation model's session state. These concepts and others may be used to emulate e.g. …
Type: Application
Filed: June 17, 2024
Publication date: December 19, 2024
Applicant: SoftEye, Inc.
Inventors: Edwin Chongwoo Park, Te-Won Lee, DoYoung Lee, Aravind Natarajan
-
Publication number: 20240419701
Abstract: Network infrastructure for user-specific generative intelligence. Providing user-specific context to a generically trained LLM introduces a variety of complications (privacy, resource utilization, training costs, etc.). Various aspects of the present disclosure provide novel user-specific data structures, privacy and access control, layers of data, and session management within a network infrastructure for generative intelligence. For example, user-specific embedding vectors may be used to provide user context to a generically trained foundation model. In some variants, edge devices capture multiple modalities of user context (images, audio; not just text). Privacy and access control mechanisms also allow a user to control information that is captured and sent to the foundation model. Session management further decouples a user's conversational state from the foundation model's session state. These concepts and others may be used to emulate e.g. …
Type: Application
Filed: June 17, 2024
Publication date: December 19, 2024
Applicant: SoftEye, Inc.
Inventors: Edwin Chongwoo Park, Te-Won Lee, DoYoung Lee, Aravind Natarajan
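The mechanism recurring across the five related filings above is a user-specific embedding vector that supplies context to a generically trained foundation model. A minimal sketch of that idea, in the style of soft prompting, is below; the store layout, function names, and dimensions are illustrative assumptions, not details from the filings.

```python
EMBED_DIM = 4  # illustrative embedding width

# Hypothetical per-user store: user id -> learned context vector.
user_store = {"alice": [0.9, -0.1, 0.3, 0.7]}

def build_model_input(user_id, prompt_embeds):
    """Prepend the user's context vector to the prompt's token embeddings,
    so a generically trained model receives user context without retraining."""
    user_vec = user_store.get(user_id, [0.0] * EMBED_DIM)  # neutral default
    return [user_vec] + prompt_embeds

prompt = [[0.0] * EMBED_DIM, [1.0] * EMBED_DIM]  # two toy prompt tokens
model_input = build_model_input("alice", prompt)
print(len(model_input))  # 3 rows: user vector + 2 prompt tokens
```

Because the user vector lives outside the model, the same generic model can serve many users, which matches the privacy and session-management framing in the abstracts.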
-
Publication number: 20240311966
Abstract: Systems, apparatus, and methods for augmenting vision with region-of-interest based processing. In one specific example, smart glasses may use an eye-tracking camera to monitor the user's gaze and determine the user's gaze point. When triggered, the camera assembly captures a high-resolution image. The high-resolution image may be cropped to a much smaller region-of-interest (ROI) image based on computer-vision analysis of the user's gaze point. For example, if the smart glasses detect a human face at the gaze point, then the ROI is cropped to the human face. In this manner, the smart glasses may leverage their specific capabilities to augment the user experience; for example, telephoto lenses provide long-distance vision, or computer-assisted search may direct the user to interesting activity. Other aspects may include, e.g., external database-assisted operation and/or ongoing cataloging throughout the day.
Type: Application
Filed: March 21, 2024
Publication date: September 19, 2024
Applicant: SoftEye, Inc.
Inventors: Edwin Chongwoo Park, Te-Won Lee
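The gaze-driven cropping described above amounts to centering a small window on the gaze point and clamping it to the frame. A minimal sketch follows; the function name and fixed ROI size are assumptions for illustration, not from the filing.

```python
def crop_roi(image_w, image_h, gaze_x, gaze_y, roi_w=200, roi_h=200):
    """Center a region of interest on the gaze point, clamping the window
    so it stays fully inside the full-resolution frame."""
    x0 = min(max(gaze_x - roi_w // 2, 0), image_w - roi_w)
    y0 = min(max(gaze_y - roi_h // 2, 0), image_h - roi_h)
    return x0, y0, roi_w, roi_h

# Gaze near the top-right corner of a 4000x3000 capture:
print(crop_roi(4000, 3000, 3990, 10))  # (3800, 0, 200, 200)
```

Only the small clamped window would then be passed to downstream computer vision, which is where the power savings come from.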
-
Publication number: 20240019939
Abstract: Systems, apparatus, and methods for a gesture-based augmented reality and/or extended reality (AR/XR) user interface. Conventional image processing scales quadratically with image resolution. Processing complexity directly corresponds to memory size, power consumption, and heat dissipation. As a result, existing smart glasses solutions have short run times (<1 hr) and may have battery-weight and heat-dissipation issues that are uncomfortable for continuous wear. The disclosed solution provides a system and method for low-power image processing via scalable processing. In one specific implementation, gesture detection is divided into multiple stages. Each stage conditionally enables subsequent stages for more complex processing. By scaling processing complexity at each stage, high-complexity processing can be performed on an “as-needed” basis.
Type: Application
Filed: August 7, 2023
Publication date: January 18, 2024
Applicant: SoftEye, Inc.
Inventors: Te-Won Lee, Edwin Chongwoo Park
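The staged, conditionally enabled pipeline described in the abstract can be sketched as a chain of detectors ordered by cost, where a failed cheap stage short-circuits the expensive ones. The stage functions below are toy stand-ins, not the patent's actual detectors.

```python
def staged_detect(frame, stages):
    """Run detection stages in increasing order of cost; each stage must
    return a truthy result to enable the next, so expensive processing
    happens only on an 'as-needed' basis."""
    result = frame
    for stage in stages:
        result = stage(result)
        if not result:
            return None  # early exit: costlier stages never run
    return result

# Toy stages: a cheap motion gate, then a (pretend) full gesture classifier.
ran = []
def motion_gate(frame):
    ran.append("motion")
    return frame if frame.get("moving") else None

def classify_gesture(frame):
    ran.append("classify")
    return "swipe"

print(staged_detect({"moving": False}, [motion_gate, classify_gesture]))  # None
print(ran)  # ['motion'] -- the classifier was skipped entirely
```

On a static scene only the cheap gate runs, which is the power-scaling behavior the abstract claims for battery-constrained smart glasses.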
-
Publication number: 20240019940
Abstract: Systems, apparatus, and methods for a gesture-based augmented reality and/or extended reality (AR/XR) user interface. Conventional image processing scales quadratically with image resolution. Processing complexity directly corresponds to memory size, power consumption, and heat dissipation. As a result, existing smart glasses solutions have short run times (<1 hr) and may have battery-weight and heat-dissipation issues that are uncomfortable for continuous wear. The disclosed solution provides a system and method for low-power image processing via scalable processing. In one specific implementation, gesture detection is divided into multiple stages. Each stage conditionally enables subsequent stages for more complex processing. By scaling processing complexity at each stage, high-complexity processing can be performed on an “as-needed” basis.
Type: Application
Filed: August 7, 2023
Publication date: January 18, 2024
Applicant: SoftEye, Inc.
Inventors: Te-Won Lee, Edwin Chongwoo Park
-
Patent number: 11847266
Abstract: Systems, apparatus, and methods for a gesture-based augmented reality and/or extended reality (AR/XR) user interface. Conventional image processing scales quadratically with image resolution. Processing complexity directly corresponds to memory size, power consumption, and heat dissipation. As a result, existing smart glasses solutions have short run times (<1 hr) and may have battery-weight and heat-dissipation issues that are uncomfortable for continuous wear. The disclosed solution provides a system and method for low-power image processing via scalable processing. In one specific implementation, gesture detection is divided into multiple stages. Each stage conditionally enables subsequent stages for more complex processing. By scaling processing complexity at each stage, high-complexity processing can be performed on an “as-needed” basis.
Type: Grant
Filed: December 2, 2022
Date of Patent: December 19, 2023
Assignee: SoftEye, Inc.
Inventors: Te-Won Lee, Edwin Chongwoo Park
-
Publication number: 20230368326
Abstract: Methods and apparatus for scalable processing. Conventional image sensors read image data in a sequential, row-by-row manner. However, image data may be processed more efficiently at different scales. For example, computer-vision processing at a first scale may be used to determine whether subsequent processing at higher resolution is helpful. Various embodiments of the present disclosure read out image data at different scales; scaled readouts may be processed using scale-specific computer-vision algorithms to determine next steps. In addition to scaled readouts of image data, some variants may also provide commonly used data and/or implement pre-processing steps.
Type: Application
Filed: May 11, 2023
Publication date: November 16, 2023
Applicant: SoftEye, Inc.
Inventors: Te-Won Lee, Edwin Chongwoo Park
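The scaled-readout idea above is: analyze a coarse, subsampled readout first, and pay for a full-resolution pass only when the coarse result suggests finer detail would help. A toy sketch with nested lists standing in for sensor rows (the detector functions are placeholders, not the patent's algorithms):

```python
def downsample(img, factor):
    """Coarse 'readout': keep every factor-th row and column."""
    return [row[::factor] for row in img[::factor]]

def scaled_process(img, coarse_detect, fine_detect, factor=2):
    """Run a cheap detector on the coarse readout; escalate to the
    full-resolution image only if the coarse pass flags interest."""
    if not coarse_detect(downsample(img, factor)):
        return None           # nothing interesting at coarse scale
    return fine_detect(img)   # full-resolution pass, on demand

img = [[0, 0, 1, 1],
       [0, 0, 1, 1],
       [1, 1, 0, 0],
       [1, 1, 0, 0]]
has_bright = lambda small: any(any(px for px in row) for row in small)
count_bright = lambda full: sum(px for row in full for px in row)
print(scaled_process(img, has_bright, count_bright))  # 8
```

A real sensor would perform the subsampled readout in hardware rather than in software, but the control flow is the same.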
-
Publication number: 20230370752
Abstract: Methods and apparatus for scalable processing. Conventional image sensors read image data in a sequential, row-by-row manner. However, image data may be processed more efficiently at different scales. For example, computer-vision processing at a first scale may be used to determine whether subsequent processing at higher resolution is helpful. Various embodiments of the present disclosure read out image data at different scales; scaled readouts may be processed using scale-specific computer-vision algorithms to determine next steps. In addition to scaled readouts of image data, some variants may also provide commonly used data and/or implement pre-processing steps.
Type: Application
Filed: May 11, 2023
Publication date: November 16, 2023
Applicant: SoftEye, Inc.
Inventors: Te-Won Lee, Edwin Chongwoo Park
-
Publication number: 20230368328
Abstract: Methods and apparatus for scalable processing. Conventional image sensors read image data in a sequential, row-by-row manner. However, image data may be processed more efficiently at different scales. For example, computer-vision processing at a first scale may be used to determine whether subsequent processing at higher resolution is helpful. Various embodiments of the present disclosure read out image data at different scales; scaled readouts may be processed using scale-specific computer-vision algorithms to determine next steps. In addition to scaled readouts of image data, some variants may also provide commonly used data and/or implement pre-processing steps.
Type: Application
Filed: May 11, 2023
Publication date: November 16, 2023
Applicant: SoftEye, Inc.
Inventors: Te-Won Lee, Edwin Chongwoo Park
-
Publication number: 20230305632
Abstract: Systems, apparatus, and methods for a gesture-based augmented reality and/or extended reality (AR/XR) user interface. Conventional image processing scales quadratically with image resolution. Processing complexity directly corresponds to memory size, power consumption, and heat dissipation. As a result, existing smart glasses solutions have short run times (<1 hr) and may have battery-weight and heat-dissipation issues that are uncomfortable for continuous wear. The disclosed solution provides a system and method for low-power image processing via scalable processing. In one specific implementation, gesture detection is divided into multiple stages. Each stage conditionally enables subsequent stages for more complex processing. By scaling processing complexity at each stage, high-complexity processing can be performed on an “as-needed” basis.
Type: Application
Filed: December 2, 2022
Publication date: September 28, 2023
Applicant: SoftEye, Inc.
Inventors: Te-Won Lee, Edwin Chongwoo Park
-
Publication number: 20230176658
Abstract: Systems, apparatus, and methods for a gesture-based augmented reality and/or extended reality (AR/XR) user interface. Conventional image processing scales quadratically with image resolution. Processing complexity directly corresponds to memory size, power consumption, and heat dissipation. As a result, existing smart glasses solutions have short run times (<1 hr) and may have battery-weight and heat-dissipation issues that are uncomfortable for continuous wear. The disclosed solution provides a system and method for low-power image processing via scalable processing. In one specific implementation, gesture detection is divided into multiple stages. Each stage conditionally enables subsequent stages for more complex processing. By scaling processing complexity at each stage, high-complexity processing can be performed on an “as-needed” basis.
Type: Application
Filed: December 2, 2022
Publication date: June 8, 2023
Applicant: SoftEye, Inc.
Inventors: Te-Won Lee, Edwin Chongwoo Park
-
Publication number: 20230176659
Abstract: Systems, apparatus, and methods for a gesture-based augmented reality and/or extended reality (AR/XR) user interface. Conventional image processing scales quadratically with image resolution. Processing complexity directly corresponds to memory size, power consumption, and heat dissipation. As a result, existing smart glasses solutions have short run times (<1 hr) and may have battery-weight and heat-dissipation issues that are uncomfortable for continuous wear. The disclosed solution provides a system and method for low-power image processing via scalable processing. In one specific implementation, gesture detection is divided into multiple stages. Each stage conditionally enables subsequent stages for more complex processing. By scaling processing complexity at each stage, high-complexity processing can be performed on an “as-needed” basis.
Type: Application
Filed: December 2, 2022
Publication date: June 8, 2023
Applicant: SoftEye, Inc.
Inventors: Te-Won Lee, Edwin Chongwoo Park
-
Publication number: 20210264947
Abstract: A device includes a memory configured to store a captured audio input signal and one or more processors configured to process the captured audio input signal to determine auditory context information within the captured audio input signal. The one or more processors are configured to determine an audio quality enhancement level to be applied to the captured audio input signal based on the determined auditory context information, and to perform audio quality enhancement on the captured audio input signal based on the determined audio quality enhancement level, wherein the audio quality enhancement level is dynamically adjusted during the storing of the captured audio input signal according to the determined auditory context information.
Type: Application
Filed: May 11, 2021
Publication date: August 26, 2021
Inventors: Te-Won Lee, Khaled Helmi El-Maleh, Heejong Yoo, Jongwon Shin
-
Patent number: 9916431
Abstract: A method, performed by an electronic device, for verifying a user to allow access to the electronic device is disclosed. In this method, sensor data may be received from a plurality of sensors including at least an image sensor and a sound sensor. Context information of the electronic device may be determined based on the sensor data, and at least one verification unit may be selected from a plurality of verification units based on the context information. Based on the sensor data from at least one of the image sensor or the sound sensor, the at least one selected verification unit may calculate at least one verification value. The method may determine whether to allow the user to access the electronic device based on the at least one verification value and the context information.
Type: Grant
Filed: January 15, 2015
Date of Patent: March 13, 2018
Assignee: QUALCOMM Incorporated
Inventors: Kyu Woong Hwang, Seungwoo Yoo, Duck-Hoon Kim, Sungwoong Kim, Te-Won Lee
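The context-driven selection in this patent can be sketched as filtering a pool of verification units by the sensed context, then fusing the selected units' verification values into an access decision. The unit definitions, score fusion (a plain average), and threshold below are illustrative assumptions, not the patented method.

```python
def select_units(context, units):
    """Keep only verification units suited to the current context,
    e.g. skip face matching in the dark or voice matching in noise."""
    return [u for u in units if u["suits"](context)]

def verify(sensor_data, context, units, threshold=0.8):
    """Average the selected units' verification values and compare the
    fused score against an access threshold."""
    selected = select_units(context, units)
    if not selected:
        return False  # no usable modality: deny access
    fused = sum(u["score"](sensor_data) for u in selected) / len(selected)
    return fused >= threshold

units = [
    {"name": "face",  "suits": lambda c: not c["dark"],
     "score": lambda d: d["face_match"]},
    {"name": "voice", "suits": lambda c: not c["noisy"],
     "score": lambda d: d["voice_match"]},
]
data = {"face_match": 0.95, "voice_match": 0.40}
print(verify(data, {"dark": True, "noisy": False}, units))  # False: voice only
print(verify(data, {"dark": False, "noisy": True}, units))  # True: face only
```

The same sensor data yields different outcomes depending on context, which is the behavior the claims describe.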
-
Patent number: 9602728
Abstract: A method, performed by an electronic device, for adjusting at least one image capturing parameter in a preview mode is disclosed. The method may include capturing a preview image of a scene including at least one text object based on a set of image capturing parameters. The method may also identify a plurality of text regions in the preview image. From the plurality of text regions, a target focus region may be selected. Based on the target focus region, the at least one image capturing parameter may be adjusted.
Type: Grant
Filed: June 9, 2014
Date of Patent: March 21, 2017
Assignee: QUALCOMM Incorporated
Inventors: Hyun-Mook Cho, Duck-Hoon Kim, Te-Won Lee, Ananthapadmanabhan Kandhadai, Pengjun Huang
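One plausible rule for picking the target focus region from the detected text regions is distance to the frame center; the abstract does not specify a criterion, so the rule and field names below are assumptions for illustration.

```python
def select_target_focus(regions, frame_w, frame_h):
    """Pick the text region whose center is closest to the frame center
    as the target focus region (illustrative selection rule)."""
    cx, cy = frame_w / 2, frame_h / 2
    def dist2(r):
        rx = r["x"] + r["w"] / 2
        ry = r["y"] + r["h"] / 2
        return (rx - cx) ** 2 + (ry - cy) ** 2
    return min(regions, key=dist2)

regions = [
    {"x": 10,  "y": 10,  "w": 100, "h": 30},   # text near a corner
    {"x": 280, "y": 220, "w": 80,  "h": 40},   # text near the center
]
target = select_target_focus(regions, 640, 480)
print(target["x"])  # 280 -- focus/exposure would then be tuned to this region
```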
-
Patent number: 9563265
Abstract: A method for responding to an external sound in an augmented reality (AR) application of a mobile device is disclosed. The mobile device detects a target. A virtual object is initiated in the AR application. Further, the external sound is received from a sound source by at least one sound sensor of the mobile device. Geometric information between the sound source and the target is determined, and at least one response for the virtual object to perform in the AR application is generated based on the geometric information.
Type: Grant
Filed: August 15, 2012
Date of Patent: February 7, 2017
Assignee: QUALCOMM Incorporated
Inventors: Kisun You, Taesu Kim, Kyuwoong Hwang, Minho Jin, Hyun-Mook Cho, Te-Won Lee
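The "geometric information" between sound source and target can be as simple as a direction angle that the virtual object turns toward. A toy 2-D sketch follows; the function names and the turn-to-face response are assumptions, not the patented response set.

```python
import math

def direction_to_source(target_xy, source_xy):
    """Angle, in degrees, from the target to the sound source."""
    dx = source_xy[0] - target_xy[0]
    dy = source_xy[1] - target_xy[1]
    return math.degrees(math.atan2(dy, dx))

def respond(virtual_object, target_xy, source_xy):
    """Generate a response for the virtual object: turn to face the sound."""
    virtual_object["heading_deg"] = direction_to_source(target_xy, source_xy)
    return virtual_object

obj = respond({"name": "sprite", "heading_deg": 0.0}, (0, 0), (0, 5))
print(obj["heading_deg"])  # 90.0 -- the sound came from directly 'above'
```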
-
Publication number: 20160210451
Abstract: A method, performed by an electronic device, for verifying a user to allow access to the electronic device is disclosed. In this method, sensor data may be received from a plurality of sensors including at least an image sensor and a sound sensor. Context information of the electronic device may be determined based on the sensor data, and at least one verification unit may be selected from a plurality of verification units based on the context information. Based on the sensor data from at least one of the image sensor or the sound sensor, the at least one selected verification unit may calculate at least one verification value. The method may determine whether to allow the user to access the electronic device based on the at least one verification value and the context information.
Type: Application
Filed: January 15, 2015
Publication date: July 21, 2016
Inventors: Kyu Woong Hwang, Seungwoo Yoo, Duck-Hoon Kim, Sungwoong Kim, Te-Won Lee