Patents Examined by Yi Yang
-
Patent number: 11947882
Abstract: Techniques that facilitate optimization of prototype and machine design within a three-dimensional fluid modeling environment are presented. For example, a system includes a modeling component, a machine learning component, and a graphical user interface component. The modeling component generates a three-dimensional model of a mechanical device based on a library of stored data elements. The machine learning component predicts one or more characteristics of the mechanical device based on a first machine learning process associated with the three-dimensional model. The machine learning component also generates physics modeling data of the mechanical device based on the one or more characteristics of the mechanical device. The graphical user interface component provides, via a graphical user interface, a three-dimensional design environment associated with the three-dimensional model and a probabilistic simulation environment associated with optimization of the three-dimensional model.
Type: Grant
Filed: April 12, 2021
Date of Patent: April 2, 2024
Assignee: Altair Engineering, Inc.
Inventors: Zain S. Dweik, Vijay Sethuraman, Berkay Elbir
-
Patent number: 11935193
Abstract: Various techniques associated with automatic mesh generation are disclosed. One or more center curves of an outline of an object or figure are first determined. Next, for each of a plurality of points of each of the one or more center curves, a pair of rays is cast from a center curve in opposite directions, wherein the rays collide with opposite sides of the outline, and a collision pair is generated that comprises a line connecting collision points of the pair of rays on opposite sides of the outline. A mesh model of the object or figure is generated by mapping each of a set of collision pairs to polygons used to define the mesh model.
Type: Grant
Filed: March 16, 2020
Date of Patent: March 19, 2024
Assignee: Outward, Inc.
Inventors: Clarence Chui, Christopher Murphy
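The collision-pair construction this abstract describes can be illustrated with a minimal 2D sketch. Everything below is a hypothetical toy setup, not the patented method: the outline is a circle, the center curve is a horizontal segment through it, and rays are cast straight up and down from sampled curve points.

```python
import math

def collision_pairs_on_circle(radius, xs):
    """For each sample x on a horizontal center curve, cast one ray up and
    one ray down until each hits the circular outline, and record the two
    collision points as a 'collision pair'."""
    pairs = []
    for x in xs:
        y = math.sqrt(radius * radius - x * x)  # ray/outline intersection
        pairs.append(((x, +y), (x, -y)))
    return pairs

def pairs_to_quads(pairs):
    """Adjacent collision pairs bound one quadrilateral of the mesh model."""
    quads = []
    for (t0, b0), (t1, b1) in zip(pairs, pairs[1:]):
        quads.append((t0, t1, b1, b0))
    return quads

pairs = collision_pairs_on_circle(1.0, [-0.5, 0.0, 0.5])
quads = pairs_to_quads(pairs)
```

Three sampled points yield three collision pairs and two quadrilaterals; a real outline would need general ray/polygon intersection rather than the closed-form circle used here.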
-
Patent number: 11928773
Abstract: In various embodiments, a training application generates a trained encoder that automatically generates shape embeddings having a first size and representing three-dimensional (3D) geometry shapes. First, the training application generates a different view activation for each of multiple views associated with a first 3D geometry based on a first convolutional neural network (CNN) block. The training application then aggregates the view activations to generate a tiled activation. Subsequently, the training application generates a first shape embedding having the first size based on the tiled activation and a second CNN block. The training application then generates multiple reconstructed views based on the first shape embedding. The training application performs one or more training operations on at least one of the first CNN block and the second CNN block based on the views and the reconstructed views to generate the trained encoder.
Type: Grant
Filed: February 23, 2022
Date of Patent: March 12, 2024
Assignee: AUTODESK, INC.
Inventors: Thomas Ryan Davies, Michael Haley, Ara Danielyan, Morgan Fabian
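The aggregation step above, where per-view activations are combined into one tiled activation, can be sketched with plain arrays. This is a hypothetical illustration: the view count, activation shape, and side-by-side layout are assumptions, not details from the patent.

```python
import numpy as np

def tile_views(view_activations):
    """Aggregate per-view CNN activations into a single tiled activation
    by laying the views out side by side on one grid."""
    return np.concatenate(view_activations, axis=1)

# Four fake 4x4 view activations, filled with the view index for clarity.
views = [np.full((4, 4), float(i)) for i in range(4)]
tiled = tile_views(views)  # one (4, 16) tiled activation
```

The second CNN block would then consume `tiled` as a single input, so cross-view structure is available to every convolution.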
-
Patent number: 11928756
Abstract: To present augmented reality features without localizing a user, a client device receives a request for presenting augmented reality features in a camera view of a computing device of the user. Prior to localizing the user, the client device obtains sensor data indicative of a pose of the user, and determines the pose of the user based on the sensor data with a confidence level that exceeds a confidence threshold, which indicates a low-accuracy state. The client device then presents one or more augmented reality features in the camera view in accordance with the determined pose of the user while in the low-accuracy state.
Type: Grant
Filed: September 22, 2021
Date of Patent: March 12, 2024
Assignee: GOOGLE LLC
Inventors: Mohamed Suhail Mohamed Yousuf Sait, Andre Le, Juan David Hincapie, Mirko Ranieri, Marek Gorecki, Wenli Zhao, Tony Shih, Bo Zhang, Alan Sheridan, Matt Seegmiller
-
Patent number: 11922575
Abstract: Approaches described and suggested herein relate to generating three-dimensional representations of objects to be used to render virtual reality and augmented reality effects on personal devices such as smartphones and personal computers, for example. An initial surface mesh of an object is obtained. A plurality of silhouette masks of the object taken from a plurality of viewpoints is also obtained. A plurality of depth maps are generated from the initial surface mesh. Specifically, the plurality of depth maps are taken from the same plurality of viewpoints from which the silhouette masks are taken. A volume including the object is discretized into a plurality of voxels. Each voxel is then determined to be either inside the object or outside of the object based on the silhouette masks and the depth data. A final mesh is then generated from the voxels that are determined to be inside the object.
Type: Grant
Filed: March 12, 2021
Date of Patent: March 5, 2024
Assignee: A9.com, Inc.
Inventors: Himanshu Arora, Divyansh Agarwal, Arnab Dhua, Chun Kai Wang
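The inside/outside voxel test can be sketched as classic silhouette carving. This toy version is an assumption-laden simplification: it uses only silhouette masks (the patent's depth-map test is omitted), orthographic projections, and a disk-shaped silhouette; all names are hypothetical.

```python
import numpy as np

def carve(voxel_centers, views):
    """Mark each voxel inside/outside the object. views is a list of
    (project_fn, mask) pairs: project_fn maps a 3-D point to integer
    pixel coordinates, mask is a boolean silhouette image."""
    inside = np.ones(len(voxel_centers), dtype=bool)
    for project, mask in views:
        h, w = mask.shape
        for i, p in enumerate(voxel_centers):
            u, v = project(p)
            # A voxel projecting outside any silhouette cannot be inside.
            if not (0 <= v < h and 0 <= u < w and mask[v, u]):
                inside[i] = False
    return inside

# Disk-shaped silhouette, radius 10 px, in a 64x64 mask.
yy, xx = np.mgrid[0:64, 0:64]
disk = (xx - 32) ** 2 + (yy - 32) ** 2 < 10 ** 2

front = lambda p: (int(32 + 16 * p[0]), int(32 + 16 * p[1]))  # drops z
side = lambda p: (int(32 + 16 * p[2]), int(32 + 16 * p[1]))   # drops x

centers = [(0, 0, 0), (1, 0, 0), (0, 0, 1)]
inside = carve(centers, [(front, disk), (side, disk)])
```

Only the voxel at the origin survives both views; the final mesh would then be extracted (e.g. by marching cubes) from the surviving voxels.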
-
Patent number: 11922611
Abstract: Various methods and systems are provided for accelerated image rendering with motion compensation. In one embodiment, a method comprises calculating motion between a preceding image frame and a target image frame to be rendered, rendering a small image with a size smaller than a target size of the target image frame based on the calculated motion, and generating the target image frame at the target size based on the small image, the calculated motion, and a reference image frame. In this way, high-quality image frames for a video stream may be generated with a reduced amount of rendering for each frame, thereby reducing the overall processing resources dedicated to rendering as well as the power consumption for image rendering.
Type: Grant
Filed: October 15, 2020
Date of Patent: March 5, 2024
Assignee: Pixelworks Semiconductor Technology (Shanghai) Co. Ltd.
Inventors: Guohua Cheng, Junhua Chen, Neil Woodall, Hongmin Zhang, Yue Ma, Qinghai Wang
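The frame-composition step can be sketched in a few lines: render at reduced resolution, upscale, and blend with a motion-warped reference frame. The nearest-neighbour upscale, integer-pixel motion, and simple average below are hypothetical simplifications of whatever filtering the actual method uses.

```python
import numpy as np

def upscale2x(img):
    """Nearest-neighbour 2x upscale of the small rendered image."""
    return np.repeat(np.repeat(img, 2, axis=0), 2, axis=1)

def compose_frame(small, reference, motion):
    """Generate the full-size target frame from a half-resolution render
    plus a motion-warped reference frame (simple 50/50 blend)."""
    dy, dx = motion
    warped = np.roll(np.roll(reference, dy, axis=0), dx, axis=1)
    return 0.5 * upscale2x(small) + 0.5 * warped

small = np.full((2, 2), 4.0)  # newly rendered low-resolution frame
ref = np.full((4, 4), 2.0)    # previous full-resolution frame
frame = compose_frame(small, ref, motion=(1, 0))
```

Only a quarter of the target pixels are freshly rendered; the rest of the detail is borrowed from the reference frame via the motion estimate, which is the source of the claimed rendering savings.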
-
Patent number: 11900537
Abstract: A method and system for providing the ability to deform and adapt a 3D model and associated 3D object to comply with a set of extrinsic and intrinsic constraints to guarantee function and fit with respect to a 3D target object. The method includes simplifying the 3D object with a topological simplification, identifying a number of constraint zones according to defined characteristics such as external rigid and non-rigid zones and internal rigid and non-rigid zones, and modifying the 3D model with respect to the constraint zones.
Type: Grant
Filed: May 25, 2021
Date of Patent: February 13, 2024
Assignee: Technologies Shapeshift 3D INC.
Inventor: Jonathan Borduas
-
Patent number: 11890494
Abstract: A retrofittable mount system for a mask having a mask window comprises a sensor removably mounted to the mask to collect information about an environment as sensor data, wherein the sensor is removably mounted to the mask with a first mount mechanism that does not penetrate the mask window. A processor is coupled to the sensor, wherein the processor executes one or more cognitive enhancement engines to process the sensor data into enhanced characterization data. An output device is removably mounted to the mask with a second mount mechanism without penetrating the mask window. The output device receives the enhanced characterization data from the processor and communicates the enhanced characterization data to a mask wearer, such that the enhanced characterization data is integrated into natural senses of the wearer and optimized for the performance of a specific task of the wearer to reduce the cognitive load of the wearer.
Type: Grant
Filed: April 3, 2019
Date of Patent: February 6, 2024
Assignee: Qwake Technologies, Inc.
Inventors: Omer Haciomeroglu, John Davis Long, II, Michael Ralston, Sam J. Cossman
-
Patent number: 11893670
Abstract: Provided are an animation generation method, apparatus, and system and a storage medium, relating to the field of animation technology. The method includes acquiring the real feature data of a real object, where the real feature data includes the action data and the face data of the real object during a performance process; determining the target feature data of a virtual character according to the real feature data, where the virtual character is a preset animation model, and the target feature data includes the action data and the face data of the virtual character; and generating the animation of the virtual character according to the target feature data. The performance of the real object is used for generating the animation of the virtual character.
Type: Grant
Filed: August 3, 2021
Date of Patent: February 6, 2024
Assignees: Mofa (Shanghai) Information Technology Co., Ltd., Shanghai Movu Technology Co., Ltd.
Inventors: Jinxiang Chai, Wenping Zhao, Shihao Jin, Bo Liu, Tonghui Zhu, Hongbing Tan, Xingtang Xiong, Congyi Wang, Zhiyong Wang
-
Patent number: 11869467
Abstract: An information processing apparatus 100 according to the present disclosure includes an extraction unit 131 that extracts first data from an element constituting first content, and a model generation unit 132 that generates a learned model having a first encoder 50 that calculates a first feature quantity, which is a feature quantity of the first content, and a second encoder 55 that calculates a second feature quantity, which is a feature quantity of the extracted first data.
Type: Grant
Filed: October 10, 2019
Date of Patent: January 9, 2024
Assignee: SONY CORPORATION
Inventor: Taketo Akama
-
Patent number: 11854115
Abstract: A vectorized caricature avatar generator receives a user image from which face parameters are generated. Segments of the user image including certain facial features (e.g., hair, facial hair, eyeglasses) are also identified. Segment parameter values are also determined, the segment parameter values being those parameter values from a set of caricature avatars that correspond to the segments of the user image. The face parameter values and the segment parameter values are used to generate a caricature avatar of the user in the user image.
Type: Grant
Filed: November 4, 2021
Date of Patent: December 26, 2023
Assignee: Adobe Inc.
Inventors: Daichi Ito, Yijun Li, Yannick Hold-Geoffroy, Koki Madono, Jose Ignacio Echevarria Vallespi, Cameron Younger Smith
-
Patent number: 11846941
Abstract: A monitoring system that is configured to monitor a property is disclosed. The monitoring system includes a sensor that is configured to generate sensor data that reflects an attribute of a property. The monitoring system further includes a drone that generates image data, location data, and orientation data. The monitoring system further includes a monitor control unit. The monitor control unit is configured to receive the sensor data, the location data, and the orientation data. The monitor control unit is configured to determine that an event has occurred at the property and a location of the event within the property. The monitor control unit is configured to generate a graphical overlay based on the event, the location data, and the orientation data. The monitor control unit is configured to generate a graphical interface. The monitor control unit is configured to output the graphical interface.
Type: Grant
Filed: November 9, 2021
Date of Patent: December 19, 2023
Assignee: Alarm.com Incorporated
Inventors: Daniel Todd Kerzner, Donald Madden
-
Patent number: 11842447
Abstract: Disclosed is a localization method and apparatus that may acquire localization information of a device, generate a first image that includes a directional characteristic corresponding to an object included in an input image, generate a second image in which the object is projected, based on the localization information, onto map data corresponding to a location of the object, and adjust the localization information based on visual alignment between the first image and the second image.
Type: Grant
Filed: June 1, 2021
Date of Patent: December 12, 2023
Assignee: Samsung Electronics Co., Ltd.
Inventors: MinJung Son, Hyun Sung Chang
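The adjust-by-alignment idea can be sketched as a small search: score how well the observed image and the map-projected image line up, then keep the pose correction that maximizes the score. The binary maps, horizontal-shift pose model, and overlap score below are hypothetical stand-ins for the patent's directional-characteristic images and alignment measure.

```python
import numpy as np

def alignment_score(img_a, img_b):
    """Visual alignment measured as overlap between two binary maps."""
    return float(np.logical_and(img_a, img_b).sum())

def adjust_localization(observed, projected, shifts):
    """Try candidate pose corrections (horizontal pixel shifts) and keep
    the one that best aligns the projected map image with the observed
    image."""
    return max(shifts, key=lambda s: alignment_score(
        observed, np.roll(projected, s, axis=1)))

observed = np.zeros((4, 4), dtype=bool)
observed[:, 2] = True           # object edge seen at column 2
projected = np.zeros((4, 4), dtype=bool)
projected[:, 0] = True          # current pose projects it to column 0
best_shift = adjust_localization(observed, projected, [0, 1, 2])
```

A shift of two columns brings the projection onto the observed edge, so the search returns 2; a real system would optimize over full 6-DoF pose rather than a 1-D shift.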
-
Patent number: 11837145
Abstract: Provided is a display apparatus that superimposes an image on an observation target, and the display apparatus includes a display unit and a display controller. The display unit displays an object image. The display controller adjusts a display position of the object image in a display region of the display unit in a horizontal direction on a pixel line basis.
Type: Grant
Filed: October 18, 2019
Date of Patent: December 5, 2023
Assignee: SONY SEMICONDUCTOR SOLUTIONS CORPORATION
Inventors: Takayuki Iyama, Hiroyuki Ozawa
-
Patent number: 11816922
Abstract: A fingerprint extraction apparatus includes a fingerprint generation module configured to generate first and second virtual fingerprint images, perform primary image processing on the first virtual fingerprint image and primary and secondary image processing on the second virtual fingerprint image, and generate virtual overlapped fingerprint images by combining the first and second virtual fingerprint images on which image processing is performed; a machine learning module configured to generate a learning model by performing machine learning using the virtual overlapped fingerprint images as input data; and a fingerprint extraction module configured to extract a fingerprint located vertically on a center of a real image by inputting the real image to a target fingerprint extraction learning model. The primary image processing comprises image processing on a curve forming a fingerprint, and the secondary image processing comprises image processing on a location of the fingerprint in the image.
Type: Grant
Filed: December 31, 2019
Date of Patent: November 14, 2023
Assignee: Seoul National University R&DB Foundation
Inventors: Byoung Ho Lee, Jae Bum Cho, Dong Heon Yoo, Min Seok Chae, Ju Hyun Lee
-
Patent number: 11810235
Abstract: A method for establishing a complex motion controller includes the following steps: obtaining a source controller and a destination controller, wherein the source controller is configured to generate a source motion, and the destination controller is configured to generate a destination motion; determining a transition tensor between the source controller and the destination controller, wherein the transition tensor has a plurality of indices, one of the plurality of indices corresponds to a plurality of phases of the source motion; calculating a plurality of transition outcomes of the transition tensor and recording the plurality of transition outcomes according to the plurality of indices; calculating a plurality of transition qualities according to the plurality of transition outcomes; and searching for an optimal transition quality from the plurality of transition qualities to establish a complex motion controller for generating a complex motion corresponding to one of the plurality of phases.
Type: Grant
Filed: December 10, 2021
Date of Patent: November 7, 2023
Assignees: INVENTEC (PUDONG) TECHNOLOGY CORPORATION, INVENTEC CORPORATION
Inventors: Ying-sheng Luo, Jonathan Hans Soeseno, Trista Pei-Chun Chen, Wei-Chao Chen
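The tensor-indexed search over transition outcomes can be sketched with a tiny array. The outcome encoding (1.0 for a successful hand-off, 0.0 for a failure), the quality metric (success rate per phase), and the 3x3 size are all hypothetical; the patent's actual tensor spans more indices than source-motion phase alone.

```python
import numpy as np

# Hypothetical recorded transition outcomes: rows index source-motion
# phases, columns index candidate switch settings to the destination
# controller; 1.0 means the transition succeeded, 0.0 means it failed.
outcomes = np.array([[1.0, 1.0, 0.0],
                     [0.0, 1.0, 1.0],
                     [1.0, 0.0, 0.0]])

# Transition quality per phase: fraction of successful outcomes.
quality = outcomes.mean(axis=1)

# The complex motion controller transitions at the best-scoring phase.
best_phase = int(np.argmax(quality))
```

Phases 0 and 1 tie at 2/3; `argmax` keeps the first, so the controller would be built to transition at phase 0.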
-
Patent number: 11790568
Abstract: Methods, computer program products, and/or systems are provided that perform the following operations: identifying two or more distinct elements in an image; generating a sub-image for each of the two or more distinct elements; generating adjectives descriptive of content associated with a distinct element for each sub-image; displaying a response list including the adjectives associated with the distinct element of a selected sub-image in response to an interaction with the image; obtaining annotation data based in part on the response list displayed for the distinct element of the selected sub-image; and assigning the annotation data to the distinct element of the selected sub-image, wherein the annotation data is displayed in response to an interaction with the distinct element in the image.
Type: Grant
Filed: March 29, 2021
Date of Patent: October 17, 2023
Assignee: KYNDRYL, INC.
Inventors: Tiberiu Suto, Shikhar Kwatra, Vijay Ekambaram, Hemant Kumar Sivaswamy
-
Patent number: 11769237
Abstract: A multimodal medical image fusion method based on a DARTS network is provided. Feature extraction is performed on a multimodal medical image by using a differentiable architecture search (DARTS) network. The network performs learning by using the gradient of network weight as a loss function in a search phase. A network architecture most suitable for a current dataset is selected from different convolution operations and connections between different nodes, so that features extracted by the network have richer details. In addition, a plurality of indicators that can represent image grayscale information, correlation, detail information, structural features, and image contrast are used as a network loss function, so that the effective fusion of medical images can be implemented in an unsupervised learning way without a gold standard.
Type: Grant
Filed: January 30, 2021
Date of Patent: September 26, 2023
Assignee: HUAZHONG UNIVERSITY OF SCIENCE AND TECHNOLOGY
Inventors: Xuming Zhang, Shaozhuang Ye
-
Patent number: 11763533
Abstract: Embodiments of the present disclosure provide a display method based on augmented reality, a device, a storage medium, and a program product. A real-time scene image is acquired; then, if the real-time scene image includes a face image of a target object, a head image of the target object is acquired from the real-time scene image, where the head image of the target object includes the face image of the target object. A virtual image of the target object is generated according to the head image, and the virtual image of the target object is displayed in the real-time scene image based on an augmented reality display technology.
Type: Grant
Filed: March 15, 2022
Date of Patent: September 19, 2023
Assignee: Beijing Zitiao Network Technology Co., Ltd.
Inventors: Zhixiong Lu, Zihao Chen
-
Patent number: 11741662
Abstract: In various embodiments, a training application generates a trained encoder that automatically generates shape embeddings having a first size and representing three-dimensional (3D) geometry shapes. First, the training application generates a different view activation for each of multiple views associated with a first 3D geometry based on a first convolutional neural network (CNN) block. The training application then aggregates the view activations to generate a tiled activation. Subsequently, the training application generates a first shape embedding having the first size based on the tiled activation and a second CNN block. The training application then generates multiple reconstructed views based on the first shape embedding. The training application performs one or more training operations on at least one of the first CNN block and the second CNN block based on the views and the reconstructed views to generate the trained encoder.
Type: Grant
Filed: October 29, 2018
Date of Patent: August 29, 2023
Assignee: AUTODESK, INC.
Inventors: Thomas Davies, Michael Haley, Ara Danielyan, Morgan Fabian