Patents by Inventor Tong Tang
Tong Tang has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20240430383
Abstract: Embodiment systems and methods for dynamically rendering elements in a virtual environment rendered by the computing device may include monitoring interactions of participants in a virtual environment related to an element or elements presented in the virtual environment, identifying an agreement about the element or elements between at least two of the participants based on the interactions, and altering a presentation of the element or elements in the virtual environment based on the agreement by at least two of the participants.
Type: Application
Filed: June 23, 2023
Publication date: December 26, 2024
Inventors: Vinesh SUKUMAR, Jonathan KIES, Vikram GUPTA, Tong TANG, Ananthapadmanabhan Arasanipalai KANDHADAI, Vasudev BHASKARAN
-
Publication number: 20240420426
Abstract: Techniques and systems are provided for displaying content by an extended reality apparatus. For instance, a process can include receiving content for display, wherein the content is associated with a first object and a first transition, the first transition indicating a change to apply to the content based on a trigger condition; outputting the content for display in association with the first object; determining that the trigger condition has been satisfied; in response to a determination that the trigger condition has been satisfied, modifying the content for presentation based on the first transition; and outputting the modified content for display.
Type: Application
Filed: June 13, 2023
Publication date: December 19, 2024
Inventors: Jonathan KIES, Vinesh SUKUMAR, Michael Franco TAVEIRA, Tong TANG, Vikram GUPTA
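The receive/check/modify/output flow in this abstract can be sketched as follows. This is an illustrative sketch only; the class and field names are assumptions, not the patent's implementation.

```python
# Hypothetical sketch of trigger-driven content transitions: content carries a
# transition whose change is applied once its trigger condition is satisfied.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Transition:
    trigger: Callable[[dict], bool]   # condition evaluated against device state
    apply: Callable[[str], str]       # change to apply to the content

def render(content: str, transition: Transition, state: dict) -> str:
    """Output the content, modified by the transition once its trigger fires."""
    if transition.trigger(state):
        return transition.apply(content)
    return content

# Example: collapse a label to its short form when the viewer moves far away.
fade = Transition(trigger=lambda s: s["distance_m"] > 5.0,
                  apply=lambda c: c.split(" - ")[0])

assert render("Cafe - Open 9-5", fade, {"distance_m": 2.0}) == "Cafe - Open 9-5"
assert render("Cafe - Open 9-5", fade, {"distance_m": 8.0}) == "Cafe"
```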
-
Publication number: 20240221743
Abstract: Embodiments include methods of voice or speech recognition in varied environments and/or user emotional states executed by a processor of a computing device. The processor of a computing device may determine a voice or speech recognition threshold for voice or speech recognition based on information obtained from contextual information detected in an environment from which a received audio input was captured by the computing device and an emotional classification of a user's voice in the received audio input. The processor may determine a confidence score for one or more key words identified in the received audio input. The processor may then output results of a voice or speech recognition analysis of the received audio input in response to the determined confidence score exceeding the determined voice or speech recognition threshold.
Type: Application
Filed: July 27, 2021
Publication date: July 4, 2024
Inventors: Jun WEI, Xiaoxia DONG, Qimeng PAN, Kwihyuk JIN, Tong TANG
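The adaptive-threshold idea in this abstract can be illustrated with a minimal sketch. The functions and the particular mapping from context and emotion to a threshold are assumptions for illustration, not the patented method.

```python
# Illustrative only: the recognition threshold tightens in noisy contexts and
# for emotionally stressed speech, and a keyword result is emitted only when
# its confidence score exceeds that threshold.
def recognition_threshold(noise_level: float, emotion: str) -> float:
    base = 0.5
    base += 0.3 * noise_level              # noisier environment -> stricter
    if emotion in ("agitated", "stressed"):
        base += 0.1                        # emotional speech is less reliable
    return min(base, 0.95)

def recognize(keyword_scores: dict, noise_level: float, emotion: str) -> list:
    threshold = recognition_threshold(noise_level, emotion)
    return [kw for kw, score in keyword_scores.items() if score > threshold]

scores = {"call": 0.9, "stop": 0.6}
assert recognize(scores, noise_level=0.0, emotion="calm") == ["call", "stop"]
assert recognize(scores, noise_level=0.8, emotion="calm") == ["call"]
```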
-
Publication number: 20240105206
Abstract: Certain aspects of the present disclosure provide techniques and apparatus for improved machine learning. Voice data from a first user is received. In response to determining that the voice data includes an utterance of a defined keyword, a user verification score is generated by processing the voice data using a first user verification machine learning (ML) model, and a quality of the voice data is determined. In response to determining that the user verification score and determined quality satisfy one or more defined criteria, a second user verification ML model is updated based on the voice data.
Type: Application
Filed: September 23, 2022
Publication date: March 28, 2024
Inventors: Hesu HUANG, Leonid SHEYNBLAT, Vinesh SUKUMAR, Ziad ASGHAR, Joel LINSKY, Justin MCGLOIN, Tong TANG
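The gated-update flow described here (keyword detected, then score and quality criteria, then model update) can be sketched as below. The scoring and quality functions are stand-ins for the ML models the abstract describes; thresholds are illustrative.

```python
# Minimal sketch of criteria-gated model updating, assuming hypothetical
# verify/quality callables in place of the two ML models.
def maybe_update(voice_data, heard_keyword: bool, verify, quality,
                 score_min: float = 0.8, quality_min: float = 0.6) -> bool:
    """Return True only when the keyword was uttered and both the
    verification score and the audio quality clear their criteria;
    the caller would then fine-tune the second model on voice_data."""
    if not heard_keyword:
        return False
    return verify(voice_data) >= score_min and quality(voice_data) >= quality_min

assert maybe_update("clip", True, verify=lambda d: 0.9, quality=lambda d: 0.7) is True
assert maybe_update("clip", True, verify=lambda d: 0.9, quality=lambda d: 0.3) is False
assert maybe_update("clip", False, verify=lambda d: 0.9, quality=lambda d: 0.7) is False
```

Gating updates this way keeps low-quality or unverified audio from drifting the on-device model.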
-
Publication number: 20240103119
Abstract: Certain aspects of the present disclosure provide techniques for training and using machine learning models to predict locations of stationary and non-stationary objects in a spatial environment. An example method generally includes measuring, by a device, a plurality of signals within a spatial environment. Timing information is extracted from the measured plurality of signals. Based on a machine learning model, the measured plurality of signals within the spatial environment, and the extracted timing information, locations of stationary reflection points and locations of non-stationary reflection points in the spatial environment are predicted. One or more actions are taken by the device based on predicting the locations of stationary reflection points and non-stationary reflection points in the spatial environment.
Type: Application
Filed: September 23, 2022
Publication date: March 28, 2024
Inventors: Jamie Menjay LIN, Tong TANG
-
Publication number: 20240104420
Abstract: Certain aspects of the present disclosure provide techniques and apparatus for training and using machine learning models in multi-device network environments. An example computer-implemented method for network communications performed by a host device includes extracting a feature set from a data set associated with a client device using a client-device-specific feature extractor, wherein the feature set comprises a subset of features in a common feature space, training a task-specific model based on the extracted feature set and one or more other feature sets associated with other client devices, wherein the feature sets associated with the other client devices comprise one or more subsets of features in the common feature space, and deploying, to each respective client device of a plurality of client devices, a respective version of the task-specific model.
Type: Application
Filed: September 23, 2022
Publication date: March 28, 2024
Inventors: Kyu Woong HWANG, Seunghan YANG, Hyunsin PARK, Leonid SHEYNBLAT, Vinesh SUKUMAR, Ziad ASGHAR, Justin MCGLOIN, Joel LINSKY, Tong TANG
-
Publication number: 20230411022
Abstract: An implementation method of an intelligent question-answering system for prostate cancer includes: acquiring lifestyle data for prostate cancer based on a preset first data source; using the lifestyle data for prostate cancer as metadata to construct a lifestyle knowledge base; constructing a lifestyle knowledge graph based on a preset second data source and the lifestyle knowledge base; and fusing the lifestyle knowledge base and the lifestyle knowledge graph to obtain the intelligent question-answering system for prostate cancer. The intelligent question-answering system for prostate cancer can help clinicians, medical staff, scientific researchers, and patients, as well as the general public, to acquire objective lifestyle data conveniently and efficiently, and accurately assess the impact on prostate cancer.
Type: Application
Filed: May 10, 2023
Publication date: December 21, 2023
Inventors: Bairong Shen, Tong Tang, Jiao Wang, Xingyun Liu, Rongrong Wu, Mengqiao He, Fei Ye, Yingbo Zhang
-
Publication number: 20230300329
Abstract: A picture processing method and a video decoder are provided in the disclosure. In this disclosure, a joint asymptotic feature of a boundary to-be-filtered is determined according to sample values of samples at two sides of the boundary to-be-filtered. The joint asymptotic feature can truly reflect whether the boundary to-be-filtered is a real boundary. Whether to filter the boundary to-be-filtered is determined according to the joint asymptotic feature.
Type: Application
Filed: May 24, 2023
Publication date: September 21, 2023
Inventor: Tong TANG
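The abstract does not define the "joint asymptotic feature", so the sketch below substitutes a simple, conventional deblocking-style proxy: compare the jump across the boundary with the smoothness on each side. All names and thresholds are assumptions, not the patented feature.

```python
# Hedged sketch of a boundary-filtering decision: a modest step between two
# smooth sides looks like a blocking artifact (filter it); a large step is
# treated as a real picture edge and left alone.
def should_filter(left: list, right: list, alpha: int = 10, beta: int = 4) -> bool:
    step = abs(left[-1] - right[0])       # jump across the boundary
    smooth_l = abs(left[-1] - left[-2])   # local variation just left of it
    smooth_r = abs(right[0] - right[1])   # local variation just right of it
    return 0 < step < alpha and smooth_l < beta and smooth_r < beta

# Small step between smooth sides -> likely coding artifact, filter.
assert should_filter([100, 100, 102, 104], [110, 110, 110, 110]) is True
# Large step -> likely a real boundary, do not filter.
assert should_filter([100, 100, 100, 100], [160, 160, 160, 160]) is False
```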
-
Patent number: 11587363
Abstract: The invention relates to a sensor system for checking a vein pattern. The sensor system comprises a first light source configured to emit, during operation and across its entire surface, electromagnetic waves with wavelengths in the near-infrared range, which are absorbed by hemoglobin. Furthermore, the sensor system comprises a second light source configured to emit, during operation and across its entire surface, electromagnetic waves with wavelengths in the range of visible light. Furthermore, the sensor system comprises a camera with a first camera chip configured to record reflected electromagnetic waves with wavelengths in the near-infrared range and to convert them into a corresponding infrared image, and with a second camera chip configured to record reflected electromagnetic waves with wavelengths in the range of visible light and to convert them into a corresponding photographic image.
Type: Grant
Filed: December 9, 2020
Date of Patent: February 21, 2023
Assignee: IRIS-GMBH INFRARED & INTELLIGENT SENSORS
Inventors: Christian Kressing, Tong Tang, Narges Baharestani, Thomas Trull
-
Publication number: 20210174107
Abstract: The invention relates to a sensor system for checking a vein pattern. The sensor system comprises a first light source configured to emit, during operation and across its entire surface, electromagnetic waves with wavelengths in the near-infrared range, which are absorbed by hemoglobin. Furthermore, the sensor system comprises a second light source configured to emit, during operation and across its entire surface, electromagnetic waves with wavelengths in the range of visible light. Furthermore, the sensor system comprises a camera with a first camera chip configured to record reflected electromagnetic waves with wavelengths in the near-infrared range and to convert them into a corresponding infrared image, and with a second camera chip configured to record reflected electromagnetic waves with wavelengths in the range of visible light and to convert them into a corresponding photographic image.
Type: Application
Filed: December 9, 2020
Publication date: June 10, 2021
Inventors: Christian Kressing, Tong Tang, Narges Baharestani, Thomas Trull
-
Patent number: 11009715
Abstract: Systems and methods for fitting a wearable heads-up display (WHUD) to a subject are described. Imaging data representative of at least a portion of the subject's head with one or more gaze positions of the eyes is obtained from a plurality of cameras. One or more models representative of the subject's head are generated and a set of features is recognized in the one or more models. One or more models of the WHUD are also obtained based on WHUD data stored in memory. One or more simulations are performed positioning one or more WHUD models in proximity to at least one subject model based at least in part on the set of features recognized in the subject model. A fit of a WHUD to the subject is evaluated based at least in part on a determination regarding whether the simulation satisfies one or more of a set of criteria.
Type: Grant
Filed: September 11, 2019
Date of Patent: May 18, 2021
Assignee: Google LLC
Inventors: Samuel Jewell Lochner, Sui Tong Tang, Kevin R. Moule, Lukas Rezek, Ehsan Parvizi, Idris S. Aleem
-
Publication number: 20200081260
Abstract: Systems and methods for fitting a wearable heads-up display (WHUD) to a subject are described. Imaging data representative of at least a portion of the subject's head with one or more gaze positions of the eyes is obtained from a plurality of cameras. One or more models representative of the subject's head are generated and a set of features is recognized in the one or more models. One or more models of the WHUD are also obtained based on WHUD data stored in memory. One or more simulations are performed positioning one or more WHUD models in proximity to at least one subject model based at least in part on the set of features recognized in the subject model. A fit of a WHUD to the subject is evaluated based at least in part on a determination regarding whether the simulation satisfies one or more of a set of criteria.
Type: Application
Filed: September 11, 2019
Publication date: March 12, 2020
Inventors: Samuel Jewell Lochner, Sui Tong Tang, Kevin R. Moule, Lukas Rezek, Ehsan Parvizi, Idris S. Aleem
-
Patent number: 10547793
Abstract: A camera unit and method of control are described. The camera unit has a camera flash sub-unit configurable to emit flash light having an adjustable characteristic. A camera sensor sub-unit generates raw color data when exposed to light for processing into a digital image. The camera unit also includes a camera controller for coordinating operation of the camera flash sub-unit and the camera sensor sub-unit. The camera controller monitors one or more ambient light characteristics in a vicinity of the camera unit. Prior to receiving a command instructing the camera unit to generate the digital image, the camera controller repeatedly configures the camera flash sub-unit based on the monitored ambient light characteristics to adjust the characteristics of the emitted flash light. Once the camera controller receives the command, the camera sensor sub-unit is instructed to expose an image sensor using the pre-adjusted camera flash light to increase illumination.
Type: Grant
Filed: May 4, 2017
Date of Patent: January 28, 2020
Assignee: BlackBerry Limited
Inventors: Qian Wang, Yun Seok Choi, Graham Charles Townsend, Sui Tong Tang
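The pre-adjustment loop in this abstract, where flash configuration tracks ambient light continuously so the flash is already set when the capture command arrives, can be sketched as follows. The class, the lux-to-intensity mapping, and all numbers are illustrative assumptions.

```python
# Simplified sketch: flash intensity is re-derived from each ambient reading
# *before* any capture command, so capture uses the pre-adjusted setting.
def flash_intensity(ambient_lux: float, max_lux: float = 500.0) -> float:
    """Brighter ambient light needs less fill flash; clamp to [0, 1]."""
    return max(0.0, min(1.0, 1.0 - ambient_lux / max_lux))

class CameraController:
    def __init__(self):
        self.intensity = 1.0

    def on_ambient_reading(self, lux: float):
        # Runs repeatedly while the camera is idle, monitoring ambient light.
        self.intensity = flash_intensity(lux)

    def capture(self) -> float:
        # The command arrives; expose using the already-configured flash.
        return self.intensity

ctrl = CameraController()
for lux in (50, 200, 400):               # ambient light sampled over time
    ctrl.on_ambient_reading(lux)
assert ctrl.capture() == flash_intensity(400)   # pre-adjusted, no extra work at capture
```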
-
Patent number: 10096115
Abstract: A method for building a depth map is operable on a mobile device having a single camera integrated therewith. The method includes capturing a plurality of images of a given view using movement of the mobile device between images, capturing data regarding the movement of the mobile device during capture of the plurality of images, determining a relative position of the mobile device corresponding to each of the plurality of images, and building a depth map using the plurality of images and the relative position corresponding to each of the plurality of images.
Type: Grant
Filed: April 10, 2015
Date of Patent: October 9, 2018
Assignee: BlackBerry Limited
Inventors: Thomas Guillaume Grandin, Sui Tong Tang, Sung Ho Hong
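The core geometry behind single-camera depth from motion can be shown with standard pinhole-stereo math: the device's translation between two frames acts as a stereo baseline, and a feature's pixel disparity between the frames yields its depth. The numbers and the pinhole simplification below are illustrative, not taken from the patent.

```python
# Pinhole-stereo depth from two frames of one moving camera: Z = f * B / d,
# where B is the baseline (device translation), f the focal length in pixels,
# and d the feature's disparity in pixels between the two frames.
def depth_from_motion(baseline_m: float, focal_px: float, disparity_px: float) -> float:
    return focal_px * baseline_m / disparity_px

# Device moved 5 cm sideways; a feature shifted 20 px; focal length 800 px.
assert abs(depth_from_motion(0.05, 800.0, 20.0) - 2.0) < 1e-9  # about 2 m away
```

Repeating this per feature across many frame pairs, with device positions from motion data, fills in the depth map the abstract describes.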
-
Publication number: 20180103194
Abstract: Image capture systems, devices, and methods that automatically focus on objects in the user's field of view based on where the user is looking/gazing are described. The image capture system includes an eye tracker subsystem in communication with an autofocus camera to facilitate effortless and precise focusing of the autofocus camera on objects of interest to the user. The autofocus camera automatically focuses on what the user is looking at based on gaze direction determined by the eye tracker subsystem and one or more focus property(ies) of the object, such as its physical distance or light characteristics such as contrast and/or phase. The image capture system is particularly well-suited for use in a wearable heads-up display to capture focused images of objects in the user's field of view with minimal intervention from the user.
Type: Application
Filed: December 12, 2017
Publication date: April 12, 2018
Inventor: Sui Tong Tang
-
Publication number: 20180103193
Abstract: Image capture systems, devices, and methods that automatically focus on objects in the user's field of view based on where the user is looking/gazing are described. The image capture system includes an eye tracker subsystem in communication with an autofocus camera to facilitate effortless and precise focusing of the autofocus camera on objects of interest to the user. The autofocus camera automatically focuses on what the user is looking at based on gaze direction determined by the eye tracker subsystem and one or more focus property(ies) of the object, such as its physical distance or light characteristics such as contrast and/or phase. The image capture system is particularly well-suited for use in a wearable heads-up display to capture focused images of objects in the user's field of view with minimal intervention from the user.
Type: Application
Filed: December 12, 2017
Publication date: April 12, 2018
Inventor: Sui Tong Tang
-
Publication number: 20180007255
Abstract: Image capture systems, devices, and methods that automatically focus on objects in the user's field of view based on where the user is looking/gazing are described. The image capture system includes an eye tracker subsystem in communication with an autofocus camera to facilitate effortless and precise focusing of the autofocus camera on objects of interest to the user. The autofocus camera automatically focuses on what the user is looking at based on gaze direction determined by the eye tracker subsystem and one or more focus property(ies) of the object, such as its physical distance or light characteristics such as contrast and/or phase. The image capture system is particularly well-suited for use in a wearable heads-up display to capture focused images of objects in the user's field of view with minimal intervention from the user.
Type: Application
Filed: June 30, 2017
Publication date: January 4, 2018
Inventor: Sui Tong Tang
-
Patent number: 9848125
Abstract: A device with a front facing camera having two discrete focus positions is provided. The device comprises: a chassis comprising a front side; a display and a camera each at least partially located on the front side; the camera adjacent the display, the camera facing in a same direction as the display, the camera configured to: acquire video in a video mode; acquire an image in a camera mode; and, discretely step between a first and second focus position, the first position comprising a depth of field (“DOF”) at a hyperfocal distance and the second position comprising a DOF in a range of about 20 cm to about 1 meter; and, a processor configured to: when the camera is in the camera mode, automatically control the camera to the first position; and, when the camera is in the video mode, automatically control the camera to the second position.
Type: Grant
Filed: April 4, 2017
Date of Patent: December 19, 2017
Assignee: BlackBerry Limited
Inventors: Yun Seok Choi, Sui Tong Tang, Thomas Guillaume Grandin, Arnett Ryan Weber
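The two-position focus policy in this claim reduces to a simple mode switch, sketched below. The constant names and the function are illustrative assumptions; the distances paraphrase the abstract.

```python
# Minimal sketch: still (camera) mode snaps to the hyperfocal position; video
# mode snaps to the near position suited to a subject at arm's length.
HYPERFOCAL = "hyperfocal"   # DOF from roughly the hyperfocal distance to infinity
NEAR = "near"               # DOF covering roughly 20 cm to 1 m

def focus_position(mode: str) -> str:
    if mode == "camera":    # still image capture
        return HYPERFOCAL
    if mode == "video":     # front-facing video of the user
        return NEAR
    raise ValueError(f"unknown mode: {mode}")

assert focus_position("camera") == HYPERFOCAL
assert focus_position("video") == NEAR
```

Discretely stepping between two fixed positions avoids a continuous autofocus sweep, which is the design trade the claim describes.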
-
Publication number: 20170237889
Abstract: A camera unit and method of control are described. The camera unit has a camera flash sub-unit configurable to emit flash light having an adjustable characteristic. A camera sensor sub-unit generates raw color data when exposed to light for processing into a digital image. The camera unit also includes a camera controller for coordinating operation of the camera flash sub-unit and the camera sensor sub-unit. The camera controller monitors one or more ambient light characteristics in a vicinity of the camera unit. Prior to receiving a command instructing the camera unit to generate the digital image, the camera controller repeatedly configures the camera flash sub-unit based on the monitored ambient light characteristics to adjust the characteristics of the emitted flash light. Once the camera controller receives the command, the camera sensor sub-unit is instructed to expose an image sensor using the pre-adjusted camera flash light to increase illumination.
Type: Application
Filed: May 4, 2017
Publication date: August 17, 2017
Applicant: BlackBerry Limited
Inventors: Qian WANG, Yun Seok CHOI, Graham Charles TOWNSEND, Sui Tong TANG
-
Publication number: 20170208249
Abstract: A device with a front facing camera having two discrete focus positions is provided. The device comprises: a chassis comprising a front side; a display and a camera each at least partially located on the front side; the camera adjacent the display, the camera facing in a same direction as the display, the camera configured to: acquire video in a video mode; acquire an image in a camera mode; and, discretely step between a first and second focus position, the first position comprising a depth of field (“DOF”) at a hyperfocal distance and the second position comprising a DOF in a range of about 20 cm to about 1 metre; and, a processor configured to: when the camera is in the camera mode, automatically control the camera to the first position; and, when the camera is in the video mode, automatically control the camera to the second position.
Type: Application
Filed: April 4, 2017
Publication date: July 20, 2017
Inventors: Yun Seok CHOI, Sui Tong TANG, Thomas Guillaume GRANDIN, Arnett Ryan WEBER