Patents by Inventor Wen-Ling Hsu
Wen-Ling Hsu has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20240104467
Abstract: Tasks associated with users can be managed for efficient workflow management. A task management component (TMC) can analyze, including performing artificial intelligence-based analysis on, task-related information associated with a user(s), assessment information relating to assessing performance or expertise associated with a task, biometric information relating to health, diet, and activity associated with the user(s), and/or user(s) feedback information. Based on the analysis, TMC can adaptively adjust respective attributes associated with respective tasks, resulting in respective adjusted attributes associated with the respective tasks. Based on the respective adjusted attributes, TMC can determine task information and can present the task information to a device(s) associated with the user(s) to facilitate performance of the tasks.
Type: Application
Filed: September 22, 2022
Publication date: March 28, 2024
Inventors: Aritra Guha, Zhengyi Zhou, Jean-Francois Paiement, Eric Zavesky, Jianxiong Dong, Wen-Ling Hsu, Qiong Wu, Louis Alexander
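The adaptive attribute adjustment described above can be illustrated with a small rule-based stand-in. This is a sketch only: the abstract describes AI-based analysis, while the attribute names (`priority`, `slack_hours`), the thresholds, and the adjustment rules below are all hypothetical.

```python
def adjust_attributes(task: dict, assessment_score: float,
                      fatigue_level: float) -> dict:
    """Adjust a task's attributes from assessment and biometric signals.
    assessment_score in [0, 1]: higher means stronger expertise.
    fatigue_level in [0, 1]: higher means more fatigued (hypothetical)."""
    adjusted = dict(task)
    # Lower-expertise users get a higher priority weight on the task.
    adjusted["priority"] = task["priority"] * (2.0 - assessment_score)
    # Fatigued users get extra schedule slack.
    if fatigue_level > 0.7:
        adjusted["slack_hours"] = task["slack_hours"] + 24
    return adjusted

task = {"name": "review design doc", "priority": 1.0, "slack_hours": 4}
print(adjust_attributes(task, assessment_score=0.5, fatigue_level=0.8))
```

The adjusted task information would then be presented to the user's device, per the abstract.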
-
Publication number: 20240104858
Abstract: Creating scent models and using scent models in cross-reality environments can include capturing experience data identifying a scent detected and a context in which the scent was detected. The experience data can be provided to a scent modeling service to generate a scent model that can represent perceived scents and perceived scent intensities for a user. The scent model can be used to generate cross-reality session data to be used in a cross-reality session presented by a cross-reality device. The cross-reality device can include a scent generator and can generate the cross-reality session using data obtained from the user device. The cross-reality device can generate a further scent during the cross-reality session based on the scent model.
Type: Application
Filed: September 23, 2022
Publication date: March 28, 2024
Applicant: AT&T Intellectual Property I, L.P.
Inventors: Wen-Ling Hsu, Eric Zavesky, Louis Alexander, Aritra Guha, Jean-Francois Paiement, Qiong Wu, Zhengyi Zhou
-
Patent number: 11931187
Abstract: A method for predicting clinical severity of a neurological disorder includes steps of: a) identifying, according to a magnetic resonance imaging (MRI) image of a brain, brain image regions each of which contains a respective portion of diffusion index values of a diffusion index, which results from image processing performed on the MRI image; b) for one of the brain image regions, calculating a characteristic parameter based on the respective portion of the diffusion index values; and c) calculating a severity score that represents the clinical severity of the neurological disorder of the brain based on the characteristic parameter of the one of the brain image regions via a prediction model associated with the neurological disorder.
Type: Grant
Filed: March 16, 2018
Date of Patent: March 19, 2024
Assignees: Chang Gung Medical Foundation Chang Gung Memorial Hospital at Keelung, Chang Gung Memorial Hospital, Linkou, Chang Gung University
Inventors: Jiun-Jie Wang, Yi-Hsin Weng, Shu-Hang Ng, Jur-Shan Cheng, Yi-Ming Wu, Yao-Liang Chen, Wey-Yil Lin, Chin-Song Lu, Wen-Chuin Hsu, Chia-Ling Chen, Yi-Chun Chen, Sung-Han Lin, Chih-Chien Tsai
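The three steps of this method (per-region diffusion values, a characteristic parameter per region, a prediction model over those parameters) can be sketched as follows. This is illustrative only, not the patented method: the region names, the use of the mean as the characteristic parameter, and the linear model with hypothetical weights are all assumptions.

```python
import numpy as np

def characteristic_parameter(diffusion_values: np.ndarray) -> float:
    """Summarize one brain region's diffusion index values into a single
    characteristic parameter. The mean is used here purely for illustration."""
    return float(np.mean(diffusion_values))

def severity_score(region_values: dict, weights: dict, bias: float = 0.0) -> float:
    """Hypothetical linear prediction model mapping per-region
    characteristic parameters to a clinical severity score."""
    return bias + sum(weights[r] * characteristic_parameter(v)
                      for r, v in region_values.items())

# Hypothetical diffusion index values for two regions of interest.
regions = {
    "putamen": np.array([0.71, 0.69, 0.73]),
    "caudate": np.array([0.65, 0.66, 0.64]),
}
weights = {"putamen": 2.0, "caudate": 1.5}
print(round(severity_score(regions, weights, bias=-1.0), 3))
```

In practice the prediction model would be fitted to clinical outcome data rather than hand-weighted as above.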
-
Publication number: 20240088246
Abstract: Various embodiments of the present application are directed towards a control gate layout to improve an etch process window for word lines. In some embodiments, an integrated chip comprises a memory array, an erase gate, a word line, and a control gate. The memory array comprises a plurality of cells in a plurality of rows and a plurality of columns. The erase gate and the word line are elongated in parallel along a row of the memory array. The control gate is elongated along the row and is between and borders the erase gate and the word line. Further, the control gate has a pad region protruding towards the erase gate and the word line. Because the pad region protrudes towards the erase gate and the word line, a width of the pad region is spread between word-line and erase-gate sides of the control gate.
Type: Application
Filed: November 16, 2023
Publication date: March 14, 2024
Inventors: Yu-Ling Hsu, Ping-Cheng Li, Hung-Ling Shih, Po-Wei Liu, Wen-Tuo Huang, Yong-Shiuan Tsair, Chia-Sheng Lin, Shih Kuang Yang
-
Patent number: 11923338
Abstract: A method includes bonding a first wafer to a second wafer, with a first plurality of dielectric layers in the first wafer and a second plurality of dielectric layers in the second wafer bonded between a first substrate of the first wafer and a second substrate in the second wafer. A first opening is formed in the first substrate, and the first plurality of dielectric layers and the second wafer are etched through the first opening to form a second opening. A metal pad in the second plurality of dielectric layers is exposed to the second opening. A conductive plug is formed extending into the first and the second openings.
Type: Grant
Filed: April 20, 2020
Date of Patent: March 5, 2024
Assignee: Taiwan Semiconductor Manufacturing Company, Ltd.
Inventors: Cheng-Ying Ho, Jeng-Shyan Lin, Wen-I Hsu, Feng-Chi Hung, Dun-Nian Yaung, Ying-Ling Tsai
-
Publication number: 20230408283
Abstract: Extended reality augmentation of situational navigation (e.g., using a computerized tool) is enabled. For example, a system can comprise: a processor and a memory that stores executable instructions that, when executed by the processor, facilitate performance of operations, comprising: based on a route of a vehicle, determining a navigational instruction to be displayed via an augmented reality interface of the vehicle, and displaying the navigational instruction via the augmented reality interface, wherein displaying the navigational instruction comprises displaying a virtual leading vehicle to be followed by the vehicle.
Type: Application
Filed: June 16, 2022
Publication date: December 21, 2023
Inventors: Qiong Wu, Aritra Guha, Eric Zavesky, Wen-Ling Hsu, Zhengyi Zhou, Louis Alexander, Jean-Francois Paiement
-
Publication number: 20230410159
Abstract: Aspects of the subject disclosure may include, for example, obtaining contextual information associated with a user, wherein the user is engaged in an immersive environment using a target user device, and wherein the contextual information comprises user profile data, data regarding a location of the user, data regarding one or more inputs provided by the user, or a combination thereof, receiving data regarding a metaverse object in the immersive environment, determining a relevance of the metaverse object to the user based on the contextual information and the data regarding the metaverse object, responsive to the determining the relevance of the metaverse object to the user, generating a personalized recommendation or review of the metaverse object for the user, and causing the personalized recommendation or review to be provided to the user in the immersive environment for user consumption. Other embodiments are disclosed.
Type: Application
Filed: June 15, 2022
Publication date: December 21, 2023
Applicant: AT&T Intellectual Property I, L.P.
Inventors: Eric Zavesky, Jean-Francois Paiement, Aritra Guha, Qiong Wu, Wen-Ling Hsu, Jianxiong Dong, Tan Xu
-
Patent number: 11848828
Abstract: An artificial intelligence (AI) automation to improve network quality based on predicted locations is provided. A method can include training, by a first device comprising a processor and according to model configuration parameters received from a second device that is not the first device, a local machine learning model with training data derived from first location data collected by the first device; transmitting, by the first device to the second device, anonymized model features associated with the local machine learning model; in response to the transmitting of the anonymized model features, receiving, by the first device from the second device, an aggregated machine learning model; and estimating, by the first device, a future position of the first device by applying the aggregated machine learning model to second location data collected by the first device.
Type: Grant
Filed: August 23, 2022
Date of Patent: December 19, 2023
Assignees: AT&T Intellectual Property I, L.P., NEW JERSEY INSTITUTE OF TECHNOLOGY
Inventors: Manoop Talasila, Anwar Syed Aftab, Wen-Ling Hsu, Cristian Borcea, Yi Chen, Xiaopeng Jiang, Shuai Zhao, Guy Jacobson, Rittwik Jana
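The train-locally / share-weights / aggregate-centrally loop described in this abstract is the general shape of federated learning. The sketch below illustrates that pattern with federated averaging over a linear next-position model; the model form, function names, and training details are assumptions for illustration, not taken from the patent.

```python
import numpy as np

def train_local_model(w: np.ndarray, X: np.ndarray, y: np.ndarray,
                      lr: float = 0.05, epochs: int = 300) -> np.ndarray:
    """On-device gradient descent for a linear position model; the raw
    location data (X, y) never leaves the device -- only weights do."""
    w = w.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)  # mean-squared-error gradient
        w -= lr * grad
    return w

def aggregate(local_weights: list) -> np.ndarray:
    """Server-side federated averaging over the devices' model weights."""
    return np.mean(local_weights, axis=0)

# Two devices, each fitting the same underlying motion pattern
# (next position = 2*feature0 + 3*feature1) on private data.
rng = np.random.default_rng(0)
true_w = np.array([2.0, 3.0])
local_models = []
for _ in range(2):
    X = rng.normal(size=(50, 2))   # private per-device location features
    y = X @ true_w                 # private per-device next positions
    local_models.append(train_local_model(np.zeros(2), X, y))
global_w = aggregate(local_models)
print(np.round(global_w, 2))       # approximately [2. 3.]
```

The aggregated model is then sent back to each device, which applies it to newly collected location data to estimate its future position.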
-
Publication number: 20230370807
Abstract: In one example, a method performed by a processing system including at least one processor includes downloading at least one digital resource relating to a user-defined location, acquiring data about the user-defined location from a sensor located in the user-defined location, while the user is present in the user-defined location, detecting a user need based on the data from the sensor, augmenting the at least one digital resource with the data from the sensor to produce content that is responsive to the user need, and presenting the content that is responsive to the user need.
Type: Application
Filed: May 12, 2022
Publication date: November 16, 2023
Inventors: Eric Zavesky, Louis Alexander, Jean-Francois Paiement, Wen-Ling Hsu, David Gibbon, Jianxiong Dong
-
Publication number: 20230362070
Abstract: Aspects of the subject disclosure may include, for example, obtaining a list of a plurality of communication channels, each communication channel being associated with at least one respective feature of a plurality of features; correlating each communication channel in the list with at least one respective user response of a plurality of user responses; receiving an identification of a new communication channel that does not exist in the list, the new communication channel being associated with at least one feature of the plurality of features; determining, based upon the at least one feature that is associated with the new communication channel, with which one or more of the plurality of communication channels in the list the new communication channel is similar, resulting in a determination; and assigning, based upon the determination, at least one of the plurality of user responses to the new communication channel. Other embodiments are disclosed.
Type: Application
Filed: May 6, 2022
Publication date: November 9, 2023
Applicant: AT&T Intellectual Property I, L.P.
Inventors: Wen-Ling Hsu, Jing Guo, Wen Wang, Zhengyi Zhou, Eric Zavesky
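The similarity-based assignment this abstract describes can be sketched with feature-set matching. The Jaccard similarity, the channel names, and the example responses below are hypothetical stand-ins; the abstract does not specify a similarity measure.

```python
def jaccard(a: set, b: set) -> float:
    """Similarity between two channels' feature sets (0 = disjoint, 1 = identical)."""
    return len(a & b) / len(a | b) if a | b else 0.0

def assign_responses(new_features: set, channels: dict, responses: dict) -> list:
    """Find the known channel most similar to the new channel's features
    and reuse that channel's user responses for the new channel."""
    best = max(channels, key=lambda c: jaccard(new_features, channels[c]))
    return responses[best]

# Hypothetical known channels, their features, and correlated responses.
channels = {
    "email": {"text", "asynchronous", "long-form"},
    "sms": {"text", "short-form", "mobile"},
}
responses = {"email": ["detailed reply"], "sms": ["quick acknowledgement"]}

# A new channel (e.g., an in-app push channel) shares features with "sms".
print(assign_responses({"text", "short-form", "push"}, channels, responses))
```

A production system would likely combine multiple similar channels rather than taking only the single best match.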
-
Publication number: 20230360057
Abstract: Aspects of the subject disclosure may include, for example, analyzing, by a processing system, a customer message to determine a customer care issue and a customer sentiment, and generating a response to engage with the customer based on the customer sentiment, wherein the response includes information to address the customer care issue. The system also delivers the response to the equipment of the customer; monitors for a customer reaction to the delivered response to determine a change in the customer sentiment; and generates an additional response to further engage with the customer based on the change in the customer sentiment. Other embodiments are disclosed.
Type: Application
Filed: May 4, 2022
Publication date: November 9, 2023
Applicants: AT&T Communications Services India Private Limited, AT&T Intellectual Property I, L.P.
Inventors: Wen-Ling Hsu, Zhengyi Zhou, Eric Zavesky, Hasanathullah Inayathullah Mohammed, Jing Guo, Nishant Bhadauria, Amarendra Mahapatra, Nancy Herrero, Sarah Green
-
Publication number: 20230325845
Abstract: Aspects of the subject disclosure may include, for example, a method in which a processing system analyzes data including a user profile and historical data relating to previous interactions between an automated agent and equipment of the user. The method also includes determining a desirable outcome of an interaction between the automated agent and the user equipment; constructing a model for generating an expected outcome of a step of the interaction; using the model to perform a simulation of a next step of the interaction by generating an expected outcome for each of a plurality of possible actions, resulting in a plurality of expected outcomes; and selecting a next action for the next step of the interaction. If the desirable outcome is not obtained, the system can refine the plurality of possible actions to perform a simulation of a subsequent step of the interaction. Other embodiments are disclosed.
Type: Application
Filed: April 11, 2022
Publication date: October 12, 2023
Applicant: AT&T Intellectual Property I, L.P.
Inventors: Zhengyi Zhou, Jing Guo, Wen-Ling Hsu, Eric Zavesky
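The simulate-each-action, pick-the-best loop in this abstract can be sketched as follows. The outcome model here is a deterministic lookup table of hypothetical success probabilities; the abstract's actual model would be constructed from user profiles and interaction history.

```python
def expected_outcome(action: str, simulate, n: int = 100) -> float:
    """Estimate the expected outcome of one candidate action by running
    the outcome model n times and averaging the results."""
    return sum(simulate(action) for _ in range(n)) / n

def choose_next_action(actions, simulate) -> str:
    """Simulate each possible next step of the interaction and select
    the action with the highest expected outcome."""
    return max(actions, key=lambda a: expected_outcome(a, simulate))

# Deterministic stand-in for a learned outcome model: the (hypothetical)
# probability that a given next step resolves the customer's issue.
success_prob = {"offer_discount": 0.4, "escalate": 0.7, "send_faq": 0.2}
simulate = lambda action: success_prob[action]
print(choose_next_action(success_prob, simulate))  # prints "escalate"
```

Per the abstract, if the selected action does not produce the desired outcome, the candidate action set is refined and the simulation repeats for the following step.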
-
Publication number: 20230224452
Abstract: A volumetric content enhancement system ("the system") can annotate at least a portion of a plurality of voxels from a volumetric video with contextual data. The system can determine at least one actionable position within the volumetric video. The system can create an annotated volumetric video that includes the volumetric video, an annotation with the contextual data, and the at least one actionable position. The system can provide the annotated volumetric video to a volumetric content playback system. The system can obtain viewer feedback associated with the viewer and can determine an emotional state of the viewer based, at least in part, upon the viewer feedback. The system can receive viewer position information that identifies a specific actionable position of the viewer. The system can generate manipulation instructions to instruct the volumetric content playback system to manipulate the annotated volumetric content to achieve a desired emotional state of the viewer.
Type: Application
Filed: February 27, 2023
Publication date: July 13, 2023
Applicant: AT&T Intellectual Property I, L.P.
Inventors: Eric Zavesky, David Gibbon, Wen-Ling Hsu, Jianxiong Dong, Richard Palazzo
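Selecting a manipulation to move a viewer toward a desired emotional state can be sketched as a nearest-state search. Everything concrete below is an assumption: the valence/arousal representation of emotional state, the Euclidean distance, and the candidate manipulations with their predicted post-manipulation states.

```python
def emotional_distance(state: dict, target: dict) -> float:
    """Euclidean distance between two emotion vectors (e.g., valence/arousal)."""
    return sum((state[k] - target[k]) ** 2 for k in target) ** 0.5

def choose_manipulation(target: dict, manipulations: dict) -> str:
    """Pick the playback manipulation whose predicted post-manipulation
    viewer state lies closest to the desired emotional state."""
    return min(manipulations,
               key=lambda m: emotional_distance(manipulations[m], target))

# Desired emotional state inferred from viewer feedback (hypothetical).
target = {"valence": 0.7, "arousal": 0.4}
# Hypothetical predicted viewer states after each candidate manipulation.
manipulations = {
    "slow_camera_orbit": {"valence": 0.6, "arousal": 0.5},
    "zoom_to_action": {"valence": 0.3, "arousal": 0.9},
}
print(choose_manipulation(target, manipulations))
```

The chosen manipulation would then be sent as instructions to the volumetric content playback system, per the abstract.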
-
Patent number: 11671575
Abstract: In one example, a method performed by a processing system including at least one processor includes acquiring a first item of media content from a user, where the first item of media content depicts a subject, acquiring a second item of media content, where the second item of media content depicts the subject, compositing the first item of media content and the second item of media content to create, within a metaverse of immersive content, an item of immersive content that depicts the subject, presenting the item of immersive content on a device operated by the user, and adapting the presenting of the item of immersive content in response to a choice made by the user.
Type: Grant
Filed: September 9, 2021
Date of Patent: June 6, 2023
Assignee: AT&T Intellectual Property I, L.P.
Inventors: Eric Zavesky, Louis Alexander, David Gibbon, Wen-Ling Hsu, Tan Xu, Mohammed Abdel-Wahab, Subhabrata Majumdar, Richard Palazzo
-
Publication number: 20230128178
Abstract: A method may include receiving current environment condition information associated with an extended reality device; receiving historical environment condition information associated with the extended reality device; based on the current environment condition information and the historical environment condition information, determining one or more adjustments to meet a performance threshold for rendering objects on the extended reality device or using the extended reality device; and sending instructions to implement the one or more adjustments to meet the performance threshold for rendering objects on the extended reality device or using the extended reality device.
Type: Application
Filed: October 21, 2021
Publication date: April 27, 2023
Inventors: Eric Zavesky, Wen-Ling Hsu, Tan Xu
-
Publication number: 20230120772
Abstract: Aspects of the subject disclosure may include, for example, obtaining a first group of volumetric content, generating first metadata for the first group of volumetric content, and storing the first group of volumetric content. Further embodiments include obtaining a second group of volumetric content, generating second metadata for the second group of volumetric content, and storing the second group of volumetric content.
Type: Application
Filed: October 14, 2021
Publication date: April 20, 2023
Applicant: AT&T Intellectual Property I, L.P.
Inventors: Eric Zavesky, David Crawford Gibbon, Tan Xu, Wen-Ling Hsu, Richard Palazzo
-
Publication number: 20230070050
Abstract: In one example, a method performed by a processing system including at least one processor includes acquiring a first item of media content from a user, where the first item of media content depicts a subject, acquiring a second item of media content, where the second item of media content depicts the subject, compositing the first item of media content and the second item of media content to create, within a metaverse of immersive content, an item of immersive content that depicts the subject, presenting the item of immersive content on a device operated by the user, and adapting the presenting of the item of immersive content in response to a choice made by the user.
Type: Application
Filed: September 9, 2021
Publication date: March 9, 2023
Inventors: Eric Zavesky, Louis Alexander, David Gibbon, Wen-Ling Hsu, Tan Xu, Mohammed Abdel-Wahab, Subhabrata Majumdar, Richard Palazzo
-
Patent number: 11595636
Abstract: A volumetric content enhancement system ("the system") can annotate at least a portion of a plurality of voxels from a volumetric video with contextual data. The system can determine at least one actionable position within the volumetric video. The system can create an annotated volumetric video that includes the volumetric video, an annotation with the contextual data, and the at least one actionable position. The system can provide the annotated volumetric video to a volumetric content playback system. The system can obtain viewer feedback associated with the viewer and can determine an emotional state of the viewer based, at least in part, upon the viewer feedback. The system can receive viewer position information that identifies a specific actionable position of the viewer. The system can generate manipulation instructions to instruct the volumetric content playback system to manipulate the annotated volumetric content to achieve a desired emotional state of the viewer.
Type: Grant
Filed: August 23, 2021
Date of Patent: February 28, 2023
Assignee: AT&T Intellectual Property I, L.P.
Inventors: Eric Zavesky, David Gibbon, Wen-Ling Hsu, Jianxiong Dong, Richard Palazzo
-
Publication number: 20230059361
Abstract: In one example, a method performed by a processing system including at least one processor includes rendering an extended reality environment including a first object associated with a first media franchise, identifying a second object to replace the first object in the extended reality environment, wherein the second object is associated with at least one of: the first media franchise or a second media franchise different from the first media franchise, and rendering the second object in the extended reality environment in place of the first object.
Type: Application
Filed: August 21, 2021
Publication date: February 23, 2023
Inventors: Eric Zavesky, Richard Palazzo, Wen-Ling Hsu, Tan Xu, Mohammed Abdel-Wahab
-
Publication number: 20230057722
Abstract: A volumetric content enhancement system ("the system") can annotate at least a portion of a plurality of voxels from a volumetric video with contextual data. The system can determine at least one actionable position within the volumetric video. The system can create an annotated volumetric video that includes the volumetric video, an annotation with the contextual data, and the at least one actionable position. The system can provide the annotated volumetric video to a volumetric content playback system. The system can obtain viewer feedback associated with the viewer and can determine an emotional state of the viewer based, at least in part, upon the viewer feedback. The system can receive viewer position information that identifies a specific actionable position of the viewer. The system can generate manipulation instructions to instruct the volumetric content playback system to manipulate the annotated volumetric content to achieve a desired emotional state of the viewer.
Type: Application
Filed: August 23, 2021
Publication date: February 23, 2023
Applicant: AT&T Intellectual Property I, L.P.
Inventors: Eric Zavesky, David Gibbon, Wen-Ling Hsu, Jianxiong Dong, Richard Palazzo