Patents by Inventor Zeng Dai
Zeng Dai has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 11978143
Abstract: The present disclosure describes techniques for creating videos using virtual characters. Creation of a video may be initiated by a user. Camera input comprising a human body of the user may be received. The camera input may be split into a first stream for removing the human body and a second stream for animating a virtual character in the video. An inpainting filter may be applied to remove the human body in real time from the camera input. The inpainting filter may be configured to accelerate texture sampling. Output of the inpainting filter may be blended with images comprised in the camera input to generate camera input backgrounds.
Type: Grant
Filed: May 23, 2022
Date of Patent: May 7, 2024
Assignee: LEMON INC.
Inventors: Zeng Dai, Yunzhu Li, Nite Luo
-
Publication number: 20240078050
Abstract: Container data sharing is provided. A second container of a cluster of containers is started to process a service request in response to detecting a failure of a first container processing the service request. The service request and the data generated by the failed first container, stored on a physical external memory device, are accessed. The service request and the data are loaded onto the second container from the physical external memory device via a dedicated hardware link for high-speed container failure recovery.
Type: Application
Filed: September 1, 2022
Publication date: March 7, 2024
Inventors: Hui Wang, Yue Wang, Mai Zeng, Wei Li, Yu Mei Dai, Xiao Chen Huang
-
Publication number: 20230377556
Abstract: The present disclosure describes techniques of generating voices for virtual characters. A plurality of source sounds may be received. The plurality of source sounds may correspond to a plurality of frames of a video. The video may comprise a virtual character. The plurality of source sounds may be converted into a plurality of representations in a latent space using a first model. Each representation among the plurality of representations may comprise a plurality of parameters. The plurality of parameters may correspond to a plurality of sound features. A plurality of sounds may be generated in real time for the virtual character in the video based at least in part on modifying at least one of the plurality of parameters of each representation.
Type: Application
Filed: May 23, 2022
Publication date: November 23, 2023
Inventors: Zeng Dai, Chen Sun, Ari Shapiro, Kin Chung Wong, Weishan Yu, August Yadon
-
Publication number: 20230377236
Abstract: The present disclosure describes techniques for creating videos using virtual characters. Creation of a video may be initiated by a user. Camera input comprising a human body of the user may be received. The camera input may be split into a first stream for removing the human body and a second stream for animating a virtual character in the video. An inpainting filter may be applied to remove the human body in real time from the camera input. The inpainting filter may be configured to accelerate texture sampling. Output of the inpainting filter may be blended with images comprised in the camera input to generate camera input backgrounds.
Type: Application
Filed: May 23, 2022
Publication date: November 23, 2023
Inventors: Zeng Dai, Yunzhu Li, Nite Luo
-
Publication number: 20230222717
Abstract: A method for generating a special effect prop includes displaying a special effect prop editing page and acquiring a first image, where the special effect prop editing page is provided with a flow information display region and an effect display region, where the flow information display region is used for displaying flow information of a prop to be edited, and the effect display region is used for displaying a second image, where the second image is a preview image obtained by processing the first image according to the flow information; in response to an editing operation, editing the flow information and updating the second image; and in response to a special effect prop generation operation, generating a target special effect prop according to edited target prop flow information.
Type: Application
Filed: November 22, 2022
Publication date: July 13, 2023
Inventor: Zeng DAI
-
Publication number: 20230086327
Abstract: Systems and methods are disclosed for identifying target graphs that have nodes or neighborhoods of nodes (sub-graphs) that correspond with an input query graph. A visual analytics system supports human-in-the-loop, example-based subgraph pattern search utilizing a database of target graphs. Users can interactively select a pattern of nodes of interest. Graph neural networks encode topological and node attributes in a graph as fixed-length latent vector representations such that subgraph matching can be performed in the latent space. Once matching target graphs are identified as corresponding to the query graph, one-to-one node correspondence between the query graph and the matching target graphs is determined.
Type: Application
Filed: September 17, 2021
Publication date: March 23, 2023
Inventors: Huan SONG, Zeng DAI, Panpan XU, Liu REN
-
Publication number: 20230089148
Abstract: Methods and systems for providing an interactive image scene graph pattern search are provided. A user is provided with an image having a plurality of selectable segmented regions therein. The user selects one or more of the segmented regions to build a query graph. Via a graph neural network, matching target graphs that contain the query graph are retrieved from a target graph database. Each matching target graph has matching target nodes that match with the query nodes of the query graph. Matching target images from an image database are associated with the matching target graphs. Embeddings of each of the query nodes and the matching target nodes are extracted. A comparison of the embeddings of each query node with the embeddings of each matching target node is performed. The user interface displays the matching target images that are associated with the matching target graphs.
Type: Application
Filed: September 17, 2021
Publication date: March 23, 2023
Inventors: Zeng DAI, Huan SONG, Panpan XU, Liu REN
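The embedding-comparison step this abstract describes can be illustrated with a minimal sketch. This is not the patented implementation: the cosine-similarity matching rule, the embedding values, and the function names below are illustrative assumptions standing in for the learned latent-space matching.

```python
import math

def cosine(u, v):
    # Cosine similarity between two embedding vectors.
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def match_nodes(query_embeddings, target_embeddings):
    # For each query node, pick the target node with the most similar
    # embedding (a stand-in for the patent's node-correspondence step).
    matches = {}
    for q_id, q_vec in query_embeddings.items():
        best = max(target_embeddings,
                   key=lambda t: cosine(q_vec, target_embeddings[t]))
        matches[q_id] = best
    return matches

# Hypothetical 2-D embeddings for two query nodes and three target nodes.
query = {"person": [1.0, 0.1], "car": [0.0, 1.0]}
target = {"n1": [0.9, 0.2], "n2": [0.1, 1.1], "n3": [-1.0, 0.0]}
print(match_nodes(query, target))  # {'person': 'n1', 'car': 'n2'}
```

In practice the embeddings would come from a trained graph neural network rather than hand-picked vectors, and matching would be constrained to preserve the query graph's edge structure.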
-
Patent number: 11500470
Abstract: A helmet includes a transceiver configured to receive vehicle data from one or more sensors located on a vehicle, an inertial measurement unit (IMU) configured to collect helmet motion data of the helmet associated with a rider of the vehicle, and a processor in communication with the transceiver and IMU, and programmed to receive, via the transceiver, vehicle data from the one or more sensors located on the vehicle, determine a gesture in response to the vehicle data from the one or more sensors located on the vehicle and the helmet motion data from the IMU, and output on a display of the helmet a status interface related to the vehicle, in response to the gesture.
Type: Grant
Filed: December 23, 2019
Date of Patent: November 15, 2022
Inventors: Benzun Pious Wisely Babu, Zeng Dai, Shabnam Ghaffarzadegan, Liu Ren
-
Patent number: 11373356
Abstract: A method for generating graphics of a three-dimensional (3D) virtual environment includes: receiving, with a processor, a first camera position in the 3D virtual environment and a first viewing direction in the 3D virtual environment; receiving, with the processor, weather data including first precipitation information corresponding to a first geographic region corresponding to the first camera position in the 3D virtual environment; defining, with the processor, a bounding geometry at a first position that is a first distance from the first camera position in the first viewing direction, the bounding geometry being dimensioned so as to cover a field of view from the first camera position in the first viewing direction; and rendering, with the processor, a 3D particle system in the 3D virtual environment depicting precipitation only within the bounding geometry, the 3D particle system having features depending on the first precipitation information.
Type: Grant
Filed: March 28, 2018
Date of Patent: June 28, 2022
Assignee: Robert Bosch GmbH
Inventors: Zeng Dai, Liu Ren, Lincan Zou
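The geometry placement this abstract describes (a bounding region a fixed distance ahead of the camera, sized to cover the field of view) can be sketched as follows. The function names, the planar-quad simplification, and the field-of-view formula are illustrative assumptions, not the patented construction.

```python
import math

def bounding_center(camera_pos, view_dir, distance):
    # Place the bounding geometry's center `distance` units from the
    # camera along the normalized viewing direction.
    norm = math.sqrt(sum(c * c for c in view_dir))
    unit = [c / norm for c in view_dir]
    return [p + distance * u for p, u in zip(camera_pos, unit)]

def quad_size(fov_deg, aspect, distance):
    # Dimension a camera-facing quad so it covers the vertical field of
    # view at that distance; width follows from the aspect ratio.
    height = 2.0 * distance * math.tan(math.radians(fov_deg) / 2.0)
    return height * aspect, height  # (width, height)

center = bounding_center([0.0, 0.0, 0.0], [0.0, 0.0, -2.0], 10.0)
print(center)  # [0.0, 0.0, -10.0]
print(quad_size(90.0, 16 / 9, 10.0))
```

Restricting the particle system to this geometry keeps the number of simulated raindrops or snowflakes bounded regardless of scene size.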
-
Publication number: 20220138511
Abstract: A method may include receiving a set of images, analyzing the images, selecting an internal layer, extracting neuron activations, factorizing the neuron activations via a matrix factorization algorithm to select prototypes and generate weights for each of the selected prototypes, replacing the neuron activations of the internal layer with the selected prototypes and the weights for the selected prototypes, receiving a second set of images, classifying the second set of images using the prototypes and weights, displaying the second set of images, selected prototypes, and weights, displaying predicted results and ground truth for the second set of images, providing error images based on the predicted results and ground truth, identifying error prototypes of the selected prototypes associated with the error images, ranking error weights of the error prototypes, and outputting a new image class based on the error prototypes with the top-ranked error weights.
Type: Application
Filed: October 25, 2021
Publication date: May 5, 2022
Inventors: Panpan XU, Liu REN, Zeng DAI, Junhan ZHAO
-
Publication number: 20220138510
Abstract: A method to interpret a deep neural network that includes receiving a set of images, analyzing the set of images via a deep neural network, selecting an internal layer of the deep neural network, extracting neuron activations at the internal layer, factorizing the neuron activations via a matrix factorization algorithm to select prototypes and generate weights for each of the selected prototypes, replacing the neuron activations of the internal layer with selected prototypes and weights for each of the selected prototypes, receiving a second set of images, and classifying the second set of images via the deep neural network using the weighted prototypes without the internal layer.
Type: Application
Filed: October 25, 2021
Publication date: May 5, 2022
Inventors: Zeng DAI, Panpan XU, Liu REN, Subhajit DAS
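The prototype-replacement idea in the two publications above can be sketched as reconstructing each sample's activation vector as a weighted sum of prototype vectors, A ≈ W · P. The factorization itself (e.g. non-negative matrix factorization) is assumed to have already produced the prototypes and weights; all values and names below are illustrative.

```python
def reconstruct(weights, prototypes):
    # Approximate each sample's activation vector as a weighted sum of
    # the selected prototype vectors: row i of A ≈ weights[i] · prototypes.
    out = []
    for w_row in weights:
        vec = [0.0] * len(prototypes[0])
        for w, proto in zip(w_row, prototypes):
            for j, p in enumerate(proto):
                vec[j] += w * p
        out.append(vec)
    return out

# Two hypothetical prototypes in a 3-dimensional activation space.
prototypes = [[1.0, 0.0, 1.0], [0.0, 1.0, 0.0]]
weights = [[0.5, 0.5], [1.0, 0.0]]  # per-sample prototype weights
print(reconstruct(weights, prototypes))
# [[0.5, 0.5, 0.5], [1.0, 0.0, 1.0]]
```

Because each weight row says how strongly each human-inspectable prototype contributes to a sample, the weights double as an explanation of the layer's behavior.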
-
Publication number: 20210195981
Abstract: A helmet includes one or more sensors located in the helmet and configured to obtain cognitive-load data indicating a cognitive load of a rider of a vehicle, a wireless transceiver in communication with the vehicle, a controller in communication with the one or more sensors and the wireless transceiver, wherein the controller is configured to determine a cognitive load of the occupant utilizing at least the cognitive-load data and send a wireless command to the vehicle utilizing the wireless transceiver to execute commands to adjust a driver assistance function when the cognitive load is above a threshold.
Type: Application
Filed: December 27, 2019
Publication date: July 1, 2021
Inventors: Shabnam GHAFFARZADEGAN, Benzun Pious Wisely BABU, Zeng DAI, Liu REN
-
Publication number: 20210201854
Abstract: A smart helmet includes a heads-up display (HUD) configured to output graphical images within a virtual field of view on a visor of the smart helmet. A transceiver is configured to communicate with a mobile device of a user. A processor is programmed to receive, via the transceiver, calibration data from the mobile device that relates to one or more captured images from a camera on the mobile device, and alter the virtual field of view of the HUD based on the calibration data. This allows a user to calibrate his/her HUD of the smart helmet based on images received from the user's mobile device.
Type: Application
Filed: December 27, 2019
Publication date: July 1, 2021
Inventors: Benzun Pious Wisely BABU, Zeng DAI, Shabnam GHAFFARZADEGAN, Liu REN
-
Publication number: 20210191518
Abstract: A helmet includes a transceiver configured to receive vehicle data from one or more sensors located on a vehicle, an inertial measurement unit (IMU) configured to collect helmet motion data of the helmet associated with a rider of the vehicle, and a processor in communication with the transceiver and IMU, and programmed to receive, via the transceiver, vehicle data from the one or more sensors located on the vehicle, determine a gesture in response to the vehicle data from the one or more sensors located on the vehicle and the helmet motion data from the IMU, and output on a display of the helmet a status interface related to the vehicle, in response to the gesture.
Type: Application
Filed: December 23, 2019
Publication date: June 24, 2021
Inventors: Benzun Pious Wisely BABU, Zeng DAI, Shabnam GHAFFARZADEGAN, Liu REN
-
Patent number: 10959479
Abstract: A system for providing a rider of a saddle-ride vehicle, such as a motorcycle, with information about helmet usage is provided. A camera is mounted to the saddle-ride vehicle, faces the rider, and is configured to monitor the rider and collect rider image data. A GPS system is configured to detect a location of the saddle-ride vehicle. A controller is in communication with the camera and the GPS system. The controller is configured to receive an image of the rider from the camera, determine if the rider is wearing a helmet based on the rider image data, and output a helmet-worn indicator to the rider, in which the helmet-worn indicator varies based on the determined location of the saddle-ride vehicle.
Type: Grant
Filed: December 27, 2019
Date of Patent: March 30, 2021
Assignee: Robert Bosch GmbH
Inventors: Benzun Pious Wisely Babu, Zeng Dai, Shabnam Ghaffarzadegan, Liu Ren
-
Patent number: 10901119
Abstract: A method for generating graphics of a three-dimensional (3D) virtual environment includes: receiving, with a processor, weather data corresponding to a geographic region, the weather data including a sequence of precipitation intensity values, each precipitation intensity value being associated with a respective timestamp of a chronological sequence of timestamps; calculating, with the processor, a first precipitation accumulation value based on the sequence of precipitation intensity values, the first precipitation accumulation value corresponding to a first time; and rendering, with the processor, a depiction of accumulated precipitation in the 3D virtual environment, the depiction of accumulated precipitation depending on the first precipitation accumulation value.
Type: Grant
Filed: March 28, 2018
Date of Patent: January 26, 2021
Assignee: Robert Bosch GmbH
Inventors: Zeng Dai, Liu Ren, Lincan Zou
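The accumulation calculation this abstract describes can be sketched as integrating the timestamped intensity sequence up to the time of interest. The trapezoidal integration and linear interpolation below are illustrative assumptions; the patent does not specify this particular rule.

```python
def accumulation(timestamps, intensities, t):
    # Integrate precipitation intensity over time (trapezoidal rule)
    # up to time t, yielding an accumulation value for rendering.
    total = 0.0
    for (t0, i0), (t1, i1) in zip(zip(timestamps, intensities),
                                  zip(timestamps[1:], intensities[1:])):
        if t1 <= t:
            total += 0.5 * (i0 + i1) * (t1 - t0)
        elif t0 < t:
            # Partial interval: linearly interpolate the intensity at t.
            it = i0 + (i1 - i0) * (t - t0) / (t1 - t0)
            total += 0.5 * (i0 + it) * (t - t0)
    return total

ts = [0.0, 1.0, 2.0, 3.0]         # hypothetical timestamps, in hours
mm_per_hr = [0.0, 2.0, 2.0, 0.0]  # matching intensity samples
print(accumulation(ts, mm_per_hr, 2.0))  # 3.0
```

The renderer would then map the scalar accumulation value to visual effects such as puddle extent or snow depth.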
-
Patent number: 10553025
Abstract: A method and device for determining a footprint of a 3D structure are disclosed. The method includes: receiving mesh data for the 3D structure that includes vertices and edges that form polygons of the 3D structure; determining a connection graph including candidate nodes and candidate lines by (i) identifying all edges having a vertex that is less than a threshold height value, and (ii) mapping the identified edges and vertices thereof onto a 2D plane; determining an adjacency list that indicates, for each candidate node, which other candidate nodes are connected to the candidate node by a candidate line; and generating a footprint of the 3D structure based on the connection graph and the adjacency list, the footprint including vertices corresponding to a selection of the candidate nodes and including edges corresponding to a selection of the candidate lines.
Type: Grant
Filed: March 14, 2018
Date of Patent: February 4, 2020
Assignee: Robert Bosch GmbH
Inventors: Lincan Zou, Liu Ren, Zeng Dai, Cheng Zhang
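The first step of the method above (selecting near-ground edges and projecting them onto a 2D plane) can be sketched as follows. For simplicity this sketch keeps an edge only when both endpoints lie below the threshold; the patent's exact selection rule and the subsequent connection-graph and adjacency-list steps are not reproduced, and all names and coordinates are illustrative.

```python
def footprint_edges(vertices, edges, height_threshold):
    # Keep edges whose endpoints both lie below the height threshold,
    # then project them onto the 2D ground plane by dropping z.
    kept = []
    for a, b in edges:
        (ax, ay, az), (bx, by, bz) = vertices[a], vertices[b]
        if az < height_threshold and bz < height_threshold:
            kept.append(((ax, ay), (bx, by)))
    return kept

# A toy mesh fragment: three ground-level vertices and one roof vertex.
verts = {0: (0.0, 0.0, 0.1), 1: (4.0, 0.0, 0.1),
         2: (4.0, 3.0, 0.1), 3: (4.0, 3.0, 9.0)}
edges = [(0, 1), (1, 2), (2, 3)]
print(footprint_edges(verts, edges, 0.5))
# [((0.0, 0.0), (4.0, 0.0)), ((4.0, 0.0), (4.0, 3.0))]
```

The retained 2D segments become the candidate lines from which the footprint polygon is assembled.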
-
Method and system for efficient rendering of cloud weather effect graphics in three-dimensional maps
Patent number: 10535180
Abstract: A method for displaying graphics of clouds in a three-dimensional (3D) virtual environment includes generating a filtered texture based on a threshold filter applied to a cloud texture where the filter threshold corresponds to cloud coverage information in weather data of a geographic region. The method further includes mapping the filtered texture to a geometric surface corresponding to a sky dome in the 3D virtual environment, coloring a plurality of texels in the mapped filtered texture on the geometric surface stored in the memory based on an isotropic single-scatter color model, and generating a graphical depiction of the 3D virtual environment including at least a portion of the geometric surface corresponding to the sky dome with clouds based on the plurality of texels of the filtered texture that are colored and mapped to the geometric surface.
Type: Grant
Filed: March 28, 2018
Date of Patent: January 14, 2020
Assignee: Robert Bosch GmbH
Inventors: Zeng Dai, Liu Ren, Lincan Zou
-
Patent number: 10496252
Abstract: A system and method for providing a location based interactive informational display includes processing circuitry outputting on a display device a map of a region represented with a first level of detail and including a location focusing graphical indicia overlaid on a sub-region of the map, outputting on the display device a details frame that includes information of the sub-region with a second level of detail that is higher than the first level of detail, receiving user input for modifying the sub-region on which the location focusing graphical indicia is overlaid, and, responsive to the user input, modifying the display of the location focusing graphical indicia and modifying the details frame to include information at the second level of detail corresponding to the modified sub-region.
Type: Grant
Filed: January 5, 2017
Date of Patent: December 3, 2019
Assignee: Robert Bosch GmbH
Inventors: Liu Ren, Zeng Dai
-
Method and System for Efficient Rendering of Cloud Weather Effect Graphics in Three-Dimensional Maps
Publication number: 20190304159
Abstract: A method for displaying graphics of clouds in a three-dimensional (3D) virtual environment includes generating a filtered texture based on a threshold filter applied to a cloud texture where the filter threshold corresponds to cloud coverage information in weather data of a geographic region. The method further includes mapping the filtered texture to a geometric surface corresponding to a sky dome in the 3D virtual environment, coloring a plurality of texels in the mapped filtered texture on the geometric surface stored in the memory based on an isotropic single-scatter color model, and generating a graphical depiction of the 3D virtual environment including at least a portion of the geometric surface corresponding to the sky dome with clouds based on the plurality of texels of the filtered texture that are colored and mapped to the geometric surface.
Type: Application
Filed: March 28, 2018
Publication date: October 3, 2019
Inventors: Zeng Dai, Liu Ren, Lincan Zou
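The threshold-filter step described in this cloud-rendering abstract (and its granted counterpart above) can be sketched as a per-texel cutoff derived from reported cloud coverage. The specific mapping from coverage to threshold, the binary output, and the names below are illustrative assumptions, not the patented filter.

```python
def filter_texture(cloud_texture, coverage):
    # Threshold a grayscale cloud texture: higher weather-reported
    # coverage lowers the cutoff, so more texels count as cloud.
    threshold = 1.0 - coverage
    return [[1.0 if texel >= threshold else 0.0 for texel in row]
            for row in cloud_texture]

texture = [[0.2, 0.6], [0.9, 0.4]]  # illustrative noise-texture texels
print(filter_texture(texture, 0.5))  # [[0.0, 1.0], [1.0, 0.0]]
```

The filtered texels would then be mapped onto the sky-dome surface and colored by the scattering model before display.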