Animation Patents (Class 345/473)
-
Patent number: 11644941
Abstract: In one implementation, a method of manipulating animation timing is provided by a device including one or more processors coupled to non-transitory memory. The method includes displaying, using a display, a timeline for an animation of an object moving along a path, wherein the timeline includes a plurality of ticks, wherein each of the plurality of ticks is associated with a respective distance along the timeline and a respective distance along the path, wherein the respective distance along the timeline is proportional to an amount of time for the object to move the respective distance along the path. The method includes receiving, using one or more input devices, an input within the timeline. The method includes, in response to receiving the input within the timeline, changing the respective distances along the timeline of two or more of the plurality of ticks.
Type: Grant
Filed: June 17, 2021
Date of Patent: May 9, 2023
Assignee: APPLE INC.
Inventors: Karen Natalie Wong, James Graham McCarter, Jee Young Park
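The claimed proportionality between tick spacing and travel time can be illustrated with a minimal sketch. The function name and inputs are assumptions for illustration, not taken from the patent:

```python
def tick_positions(segment_times, timeline_length):
    """Place one tick per path segment so that each tick's distance
    along the timeline is proportional to the cumulative time the
    object needs to reach that point on the path."""
    total = sum(segment_times)
    positions = []
    elapsed = 0.0
    for t in segment_times:
        elapsed += t
        positions.append(timeline_length * elapsed / total)
    return positions

# Equal segment times yield evenly spaced ticks; a slower segment
# pushes its tick (and all later ticks) further along the timeline.
print(tick_positions([1.0, 1.0, 2.0], 100.0))  # [25.0, 50.0, 100.0]
```

Dragging a tick in such a scheme amounts to redistributing segment times and recomputing the remaining positions, which matches the abstract's "changing the respective distances along the timeline of two or more of the plurality of ticks."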
-
Patent number: 11645804
Abstract: An animated emoticon generation method, a computer-readable storage medium, and a computer device are provided. The method includes: displaying an emoticon input panel on a chat page; detecting whether a video shooting event is triggered in the emoticon input panel; acquiring video data in response to detecting the video shooting event; obtaining an edit operation for the video data; processing video frames in the video data according to the edit operation to synthesize an animated emoticon; and adding an emoticon thumbnail corresponding to the animated emoticon to the emoticon input panel, the emoticon thumbnail displaying the animated emoticon to be used as a message on the chat page based on a user selecting the emoticon thumbnail in the emoticon input panel.
Type: Grant
Filed: September 9, 2019
Date of Patent: May 9, 2023
Assignee: TENCENT TECHNOLOGY (SHENZHEN) COMPANY LIMITED
Inventors: Dong Huang, Tian Yi Liang, Jia Wen Zhong, Jun Jie Zhou, Jin Jiang, Ying Qi, Si Kun Liang
-
Patent number: 11644902
Abstract: A method can include determining, by a head-mounted device, that a user is looking at a first electronic device; determining that the user made a predefined gesture; determining content that was presented by the first electronic device when the user made the predefined gesture; and instructing a second electronic device to present the content that was presented by the first electronic device when the user made the predefined gesture.
Type: Grant
Filed: November 30, 2020
Date of Patent: May 9, 2023
Assignee: GOOGLE LLC
Inventors: Ian Chen Zhang, Ricardo John Campbell
-
Patent number: 11642788
Abstract: A system and method for detecting and fixing robotic process automation failures, including collecting tasks from at least one client computerized device, processing the tasks via robotic process automation, collecting tasks that failed to complete per task type, recording successful execution steps per each of the failed tasks, evaluating the recorded successful execution steps with respect to the failed task types, and providing selected execution steps that best fix the failed tasks, thereby fixing the robotic process automation failures.
Type: Grant
Filed: October 3, 2022
Date of Patent: May 9, 2023
Assignee: NICE LTD.
Inventors: David Geffen, Lior Epstein, Gal Tesler
-
Patent number: 11645803
Abstract: According to an embodiment, a source object presented in a source video is identified. Attribute information of the source object in respective frames of a sequence of source frames in the source video is identified. The attribute information represents an animation effect associated with the source object across the sequence of source frames. The attribute information is provided for use in reproducing the animation effect in a target video.
Type: Grant
Filed: August 7, 2020
Date of Patent: May 9, 2023
Assignee: International Business Machines Corporation
Inventors: Jian Jun Wang, Ting Chen, Shi Hui Gui, Li Yi Zhou, Jing Xia, Yidan Lei
-
Patent number: 11633669
Abstract: Methods and apparatus are disclosed for video transmission. In one example, computer-readable storage media store instructions that cause a processor to: generate first motion data; distribute, toward terminal devices of a plurality of viewers via a communication line, the first motion data; receive a web page; receive first operation data from a user interface; generate a second video related to a computer-implemented game on the basis of the first operation data by using the received web page; distribute the second video toward the terminal devices of the plurality of viewers; receive viewer data regarding a plurality of viewers; extract a selected game object out of a plurality of game objects to be used in the game; calculate a control parameter related to the selected game object on the basis of the viewer data; generate the second video including the selected game object; and distribute the second video toward the terminal devices.
Type: Grant
Filed: June 18, 2021
Date of Patent: April 25, 2023
Assignee: GREE, Inc.
Inventor: Yosuke Kanaya
-
Patent number: 11635880
Abstract: Systems, methods and computer-readable storage media that can be used to configure an animated content item based on a position of the animated content item within a viewport of a computing device upon which the animated content item is presented. One method includes providing, to a first computing device, an animation configuration interface configured to allow selection via the first computing device of a position-dependent setting comprising a position within the viewport at which a property of the animated content item changes. The method further includes receiving, by a second computing device, the position-dependent setting and configuring the property of the animated content item based on the position-dependent setting such that the animated content item is configured to change the property when presented within a viewport of the second computing device at the position of the viewport in accordance with the position-dependent setting.
Type: Grant
Filed: October 23, 2019
Date of Patent: April 25, 2023
Assignee: GOOGLE LLC
Inventors: Nivesh Rajbhandari, Mariko Ogawa
-
Patent number: 11625878
Abstract: Provided is a method of generating a three-dimensional (3D) avatar from a two-dimensional (2D) image. The method may include obtaining a 2D image by capturing a face of a person, detecting a landmark of the face in the obtained 2D image, generating a first mesh model by modeling a 3D geometrical structure of the face based on the detected landmark, extracting face texture information from the obtained 2D image, determining a second mesh model to be blended with the first mesh model in response to a user input, wherein the first mesh model and the second mesh model have the same mesh topology, generating a 3D avatar by blending the first mesh model and the second mesh model, and applying, to the 3D avatar, a visual expression corresponding to the extracted face texture information.
Type: Grant
Filed: June 29, 2020
Date of Patent: April 11, 2023
Assignee: Seerslab, Inc.
Inventors: Jin Wook Chong, Jae Cheol Kim, Hyo Min Kim, Jun Hwan Jang
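The shared-topology requirement in the abstract is what makes blending well defined: corresponding vertices can be paired one-to-one and interpolated. A minimal sketch of such per-vertex blending (the function name and linear interpolation are illustrative assumptions, not the patent's specific blending method):

```python
def blend_meshes(verts_a, verts_b, alpha=0.5):
    """Blend two mesh models that share the same topology by linearly
    interpolating corresponding vertex positions: alpha = 0 keeps the
    first mesh, alpha = 1 yields the second."""
    return [tuple(a + alpha * (b - a) for a, b in zip(va, vb))
            for va, vb in zip(verts_a, verts_b)]

first = [(0.0, 0.0, 0.0), (2.0, 0.0, 0.0)]
second = [(0.0, 2.0, 0.0), (4.0, 0.0, 2.0)]
print(blend_meshes(first, second, alpha=0.5))
# [(0.0, 1.0, 0.0), (3.0, 0.0, 1.0)]
```

Because faces (triangles) are defined by vertex indices and the topology is identical, the blended vertex list can reuse either mesh's face list unchanged.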
-
Patent number: 11627126
Abstract: Aspects of the disclosure relate to simplified and expedited processing of access requests to network resources. Authorized individuals can set rules for accessing network resources. The rules can be implemented as a series of macro steps assigned to various access rights and can be consolidated in a single button or widget for a particular user group. In response to a user's one-click selection of the button or widget, all applicable access rights can be requested sequentially from appropriate services or individuals without requiring complex instructions or myriad user actions. User interfaces and API(s) are provided to enable users to request access and managers to set up access requirements and button configurations. Novel logical systems, architectures, platforms, graphical user interfaces, and methods are disclosed.
Type: Grant
Filed: August 20, 2020
Date of Patent: April 11, 2023
Assignee: Bank of America Corporation
Inventors: Tinku Thomas, Paul Joseph Harding, David Patrick Harte, Reuben Oliver Wells
-
Patent number: 11625904
Abstract: A method for processing images can include: acquiring a hair region in a first image; determining a hair direction parameter of a pixel point in the hair region by a hair direction prediction model based on the hair region; converting the hair direction parameter of the pixel point into a hair direction of the pixel point; and generating a second image by processing the hair region in the first image based on the hair direction of the pixel point.
Type: Grant
Filed: March 5, 2021
Date of Patent: April 11, 2023
Assignee: BEIJING DAJIA INTERNET INFORMATION TECHNOLOGY CO., LTD.
Inventor: Heng Zhang
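The abstract's "converting the hair direction parameter into a hair direction" step leaves the parameterization unspecified. A common choice in orientation-field estimation, sketched here purely as an assumption: since hair direction is ambiguous up to 180 degrees, a model predicts the pair (cos 2θ, sin 2θ) per pixel, and the angle is recovered with atan2 and halved.

```python
import math

def direction_from_parameters(cos2t, sin2t):
    """Recover a per-pixel hair direction angle theta (radians) from a
    predicted (cos 2*theta, sin 2*theta) pair. The doubled angle makes
    the representation invariant to the 180-degree ambiguity of a hair
    strand's direction. This parameterization is an assumption; the
    patent abstract does not specify one."""
    return 0.5 * math.atan2(sin2t, cos2t)

# A parameter pair of (0, 1) corresponds to theta = 45 degrees.
angle = direction_from_parameters(0.0, 1.0)
print(round(math.degrees(angle), 6))
```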
-
Patent number: 11620984
Abstract: A human-computer interaction method can include detecting a voice input, and determining whether a first detected voice includes a wake-up word, the wake-up word being intended to wake up an avatar in a social interaction client; displaying the avatar on a live streaming room interface provided by the social interaction client, in response to determining that the first detected voice includes the wake-up word; continuing to detect a voice input, and determining a recognition result by recognizing a second detected voice; determining a user intention based on the recognition result; and controlling, based on the user intention, the avatar to output feedback information.
Type: Grant
Filed: September 3, 2020
Date of Patent: April 4, 2023
Assignee: BEIJING DAJIA INTERNET INFORMATION TECHNOLOGY CO., LTD.
Inventors: Meizhuo Li, Yuanyuan Zhao
-
Patent number: 11620781
Abstract: A system and method for controlling the animation and movement of in-game objects. In some embodiments, the system includes one or more data-driven animation building blocks that can be used to define any character movements. In some embodiments, the data-driven animation blocks are conditioned by how their data is described separately from any explicit code in the core game engine. These building blocks can accept certain inputs from the core code system (e.g., movement direction, desired velocity of movement, and so on). But the game itself is agnostic as to why particular building blocks are used and what animation data (e.g., single animation, parametric blend, defined by user, and so on) the blocks may be associated with.
Type: Grant
Filed: October 23, 2020
Date of Patent: April 4, 2023
Assignee: TAKE-TWO INTERACTIVE SOFTWARE, INC.
Inventors: Tobias Kleanthous, Mike Jones, Chris Swinhoe, Arran Cartie, James Stuart Miller, Sven Louis Julia van Soom
-
Patent number: 11620418
Abstract: A design engine generates a configuration option that includes a specific arrangement of interconnected mechanical elements adhering to one or more design constraints. Each element within a given configuration option is defined by a set of design variables. The design engine implements a parametric optimizer to optimize the set of design variables associated with each configuration option. For a given configuration option, the parametric optimizer discretizes continuous equations governing the physical dynamics of the configuration. The parametric optimizer then determines the gradients of the objective and constraint functions based on a discrete direct differentiation method or a discrete adjoint variable method derived directly from the discretized motion equations. Then, the parametric optimizer traverses a design space where the configuration option resides to improve the objective function, thereby optimizing the design variables.
Type: Grant
Filed: March 16, 2018
Date of Patent: April 4, 2023
Assignee: AUTODESK, INC.
Inventors: Mehran Ebrahimi, Adrian Butscher, Hyunmin Cheong, Francesco Iorio
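The core loop the abstract describes is gradient-based traversal of a design space. The patent derives gradients analytically from the discretized motion equations (direct differentiation or adjoint method); the sketch below substitutes finite-difference gradients purely for illustration, since the analytic machinery depends on the specific dynamics:

```python
def optimize_design(objective, x, lr=0.1, steps=200, h=1e-6):
    """Minimize an objective over a list of design variables by
    gradient descent. Finite differences stand in for the patent's
    analytic (direct-differentiation / adjoint) gradients."""
    for _ in range(steps):
        fx = objective(x)
        grad = []
        for i in range(len(x)):
            xp = list(x)
            xp[i] += h
            grad.append((objective(xp) - fx) / h)  # forward difference
        x = [xi - lr * g for xi, g in zip(x, grad)]
    return x

# Toy objective: squared distance from the design point (3, -2).
f = lambda v: (v[0] - 3.0) ** 2 + (v[1] + 2.0) ** 2
print(optimize_design(f, [0.0, 0.0]))  # converges near [3.0, -2.0]
```

The adjoint method the abstract names matters in practice because it yields all design-variable gradients at roughly the cost of one extra simulation, versus one simulation per variable for finite differences.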
-
Patent number: 11615580
Abstract: A method of generating a path of an object through a virtual environment is provided, the method comprising: receiving image data, at a first instance of time, from a plurality of image capture devices arranged in a physical environment; receiving image data, at an at least one second instance of time after the first instance of time, from a plurality of image capture devices arranged in the physical environment; detecting a location of a plurality of points associated with an object within the image data from each image capture device at the first instance of time and the at least one second instance of time; projecting the location of the plurality of points associated with the object within the image data from each image capture device at the first instance of time and the at least one second instance of time into a virtual environment to generate a location of the plurality of points associated with the object in the virtual environment at each instance of time; and generating a path of the object through
Type: Grant
Filed: September 7, 2021
Date of Patent: March 28, 2023
Assignees: SONY CORPORATION, SONY EUROPE B.V.
Inventors: Anthony Daniels, William Brennan
-
Patent number: 11613012
Abstract: A robot controller that moves a first workpiece mounted on a robot with respect to a second workpiece, the robot having a sensor for detecting one of magnitude of force acting on the first workpiece and magnitude of torque acting on the robot, the robot controller including a calculation unit configured to calculate a force between the first workpiece and the second workpiece and a moment on the first workpiece, based on the magnitude of the force or the torque, a controller carrying out force control so that the calculated force and the moment correspond to a predetermined force and moment, and a display displaying at least one of a velocity of the first workpiece and an angular velocity, the velocity and the angular velocity occurring as a result of control by the controller, the velocity and the angular velocity being overlapped on an image of the robot.
Type: Grant
Filed: July 23, 2020
Date of Patent: March 28, 2023
Assignee: FANUC CORPORATION
Inventor: Takashi Satou
-
Patent number: 11615572
Abstract: Systems and methods enable rendering an avatar attuned to a user. The systems and methods include receiving audio-visual data of user communications of a user. Using the audio-visual data, the systems and methods may determine vocal characteristics of the user, facial action units representative of facial features of the user, and speech of the user based on a speech recognition model and/or natural language understanding model. Based on the vocal characteristics, an acoustic emotion metric can be determined. Based on the speech recognition data, a speech emotion metric may be determined. Based on the facial action units, a facial emotion metric may be determined. An emotional complex signature may be determined to represent an emotional state of the user for rendering the avatar attuned to the emotional state based on a combination of the acoustic emotion metric, the speech emotion metric and the facial emotion metric.
Type: Grant
Filed: October 3, 2022
Date of Patent: March 28, 2023
Assignee: Attune Media Labs, PBC
Inventors: Robert E. Bosnak, David E. Bosnak, Albert Rizzo
-
Patent number: 11615713
Abstract: Cognitive and mood states of a real world person are assessed according to activity in a virtual world environment with which the person interacts. The virtual world is configured to provide interactive experiences for assessing the person's cognitive and/or mood states. The system requires configuration of a session avatar during each virtual world session to provide then-current insight into the person's mood state. The system may require configuration of an avatar reflective of the person's state. The system requires the person to configure the virtual world environment during each virtual session to provide then-current insight into the person's mood state. The system permits the user to visit destinations, perform tasks and play games that are included in the environment for the purpose of providing insight into the person's cognitive and/or mood states according to the person's selections and/or performance.
Type: Grant
Filed: May 25, 2017
Date of Patent: March 28, 2023
Assignee: Janssen Pharmaceutica NV
Inventors: Joseph Barbuto, Katherine Bettencourt, Carine Brouillon, Gabriel Brun, Alexandra Kramer, Joe Manfredonia, Husseini Manji, Kenneth Mosca, Mark Sapp, Magdalena Schoeneich
-
Patent number: 11616745
Abstract: Among other things, embodiments of the present disclosure improve the functionality of electronic messaging software and systems by generating and selecting customized media content items (such as images) with avatars of different users within electronic messages based on the context of communications between the users. For example, users of different mobile computing devices can exchange electronic communications with each other, and the system can analyze these communications to present options for media content items containing the users' avatars based on content in the communications, actions or events taken by or involving the users, or combinations thereof. The users may select such media content items for inclusion in their electronic communications.
Type: Grant
Filed: April 12, 2017
Date of Patent: March 28, 2023
Assignee: Snap Inc.
Inventors: Jacob Edward Blackstock, Matthew Colin Grantham, Jason Bernard Innis
-
Patent number: 11607799
Abstract: An unmanned ground vehicle includes a main body, a drive system supported by the main body, a manipulator arm pivotally coupled to the main body, and a sensor module. The drive system includes right and left driven track assemblies mounted on right and left sides of the main body. The manipulator arm includes a first link coupled to the main body, an elbow coupled to the first link, and a second link coupled to the elbow. The elbow is configured to rotate independently of the first and second links. The sensor module is mounted on the elbow.
Type: Grant
Filed: November 23, 2020
Date of Patent: March 21, 2023
Assignee: Teledyne FLIR Detection, Inc.
Inventors: Annan Michael Mozeika, Mark Robert Claffee
-
Patent number: 11610609
Abstract: An enhanced video book and a system and method for creating an enhanced video book are described. Artwork and text corresponding to a storyline can be converted into a format that can be animated. A timing is established at which the converted artwork can be displayed, at a pace corresponding to the timing at which the converted text can be read. The converted artwork and/or the converted text are animated, and voice-over narration corresponding to the converted text is generated. The display of the converted artwork or the converted text is adjusted and synchronized with the voice-over narration based on the timing at which the converted artwork can be displayed. Audio is added and synchronized to the converted artwork. The converted artwork, the converted text, the animated converted artwork, the animated converted text, the voice-over narration, and the audio are combined into an enhanced video book.
Type: Grant
Filed: November 20, 2020
Date of Patent: March 21, 2023
Assignee: Vooks, Inc.
Inventors: Russell Powell Hirtzel, Marshall Bex, IV
-
Patent number: 11605196
Abstract: Provided are a method and device for providing interactive virtual reality content capable of increasing user immersion by naturally connecting an idle image to a branched image. The method includes providing an idle image including options, wherein an actor in the idle image performs a standby operation, while the actor performs the standby operation, receiving a user selection for an option, providing a connection image, and providing a corresponding branched image according to the selection of the user, wherein a portion of the actor in the connection image is processed by computer graphics, and the actor performs a connection operation so that a first posture of the actor at a time point at which the selection is received is smoothly connected to a second posture of the actor at a start time point of the branched image.
Type: Grant
Filed: April 20, 2021
Date of Patent: March 14, 2023
Assignees: VISION VR INC.
Inventors: Dong Kyu Kim, Won-Il Kim
-
Patent number: 11605202
Abstract: Embodiments of the invention are directed to a computer-implemented method of generating a pathway recommendation. The computer-implemented method includes using a processor system to generate an intermediate three-dimensional (3D) virtual reality (VR) environment of a target environment. A machine learning algorithm is used to perform a machine learning task on the intermediate 3D VR environment to generate machine learning task results including predicted features of interest (FOI) and FOI annotations for the intermediate 3D VR environment. The processor system is used to generate, based at least in part on the machine learning task results, the pathway recommendation configured to assist a user with navigating and interpreting a 3D VR environment including the intermediate 3D VR environment having the pathway recommendation.
Type: Grant
Filed: December 11, 2020
Date of Patent: March 14, 2023
Assignee: International Business Machines Corporation
Inventors: Wallas Henrique Sousa Dos Santos, Emilio Ashton Vital Brazil, Vagner Figueredo de Santana, Marcio Ferreira Moreno, Renato Fontoura de Gusmao Cerqueira
-
Patent number: 11600007
Abstract: Certain aspects of the present disclosure are directed to methods and apparatus for predicting subject motion using probabilistic models. One example method generally includes receiving training data comprising a set of subject pose trees. The set of subject pose trees comprises a plurality of subsets of subject pose trees associated with an image in a sequence of images, and each subject pose tree in the subset indicates a location along an axis of the image at which each of a plurality of joints of a subject is located. The received training data may be processed in a convolutional neural network to generate a trained probabilistic model for predicting joint distribution and subject motion based on density estimation. The trained probabilistic model may be deployed to a computer vision system and configured to generate a probability distribution for the location of each joint along the axis.
Type: Grant
Filed: February 25, 2021
Date of Patent: March 7, 2023
Assignee: Qualcomm Incorporated
Inventors: Mohammad Sadegh Ali Akbarian, Amirhossein Habibian, Koen Erik Adriaan Van De Sande
-
Patent number: 11600013
Abstract: Tracking units for facial features with advanced training for natural rendering of human faces in real-time are provided. An example device receives a video stream, and upon detecting a visual face, selects a 3D model from a comprehensive set of head orientation classes. The device determines modifications to the selected 3D model to describe the face, then projects a 2D model of tracking points of facial features based on the 3D model, and controls, actuates, or animates hardware based on the facial features tracking points. The device can switch among an example comprehensive set of 35 different head orientation classes for each video frame, based on suggestions computed from a previous video frame or from yaw and pitch angles of the visual head orientation. Each class of the comprehensive set is trained separately based on a respective collection of automatically marked images for that head orientation class.
Type: Grant
Filed: July 6, 2020
Date of Patent: March 7, 2023
Inventors: Mihai Ciuc, Stefan Petrescu, Emanuela Haller, Florin Oprea, Alexandru Nicolaescu, Florin Nanu, Iulian Palade
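Selecting among head-orientation classes from yaw and pitch angles, as the abstract describes, amounts to bucketing a 2D angle pair into a discrete grid. The sketch below assumes a uniform 7x5 yaw/pitch grid to reach the patent's 35 classes; the actual partition the patent uses is not given in the abstract:

```python
def head_orientation_class(yaw, pitch, yaw_bins=7, pitch_bins=5):
    """Map yaw/pitch angles (degrees, assumed in [-90, 90]) to one of
    yaw_bins * pitch_bins head-orientation classes. The 7x5 uniform
    grid yielding 35 classes is an illustrative assumption."""
    def bin_index(angle, bins, lo=-90.0, hi=90.0):
        angle = max(lo, min(hi, angle))        # clamp to covered range
        idx = int((angle - lo) / (hi - lo) * bins)
        return min(idx, bins - 1)              # hi maps into last bin
    return bin_index(yaw, yaw_bins) * pitch_bins + bin_index(pitch, pitch_bins)

# A frontal face (yaw 0, pitch 0) lands in the central class.
print(head_orientation_class(0.0, 0.0))  # 17
```

Per-class training then means partitioning the marked training images by this class index and fitting one model per bucket, so each model only has to cover a narrow range of head poses.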
-
Patent number: 11599188
Abstract: A system configured to generate and/or modify three-dimensional scenes comprising animated character(s) based on individual asynchronous motion capture recordings. The system may comprise sensor(s), display(s), and/or processor(s). The system may receive selection of a first character to virtually embody within the virtual space, receive a first request to capture the motion and/or the sound for the first character, and/or record first motion capture information characterizing the motion and/or the sound made by the first user as the first user virtually embodies the first character. The system may receive selection of a second character to virtually embody, receive a second request to capture the motion and/or the sound for the second character, and/or record second motion capture information. The system may generate a compiled virtual reality scene wherein the first character and the second character appear animated within the compiled virtual reality scene contemporaneously.
Type: Grant
Filed: November 19, 2021
Date of Patent: March 7, 2023
Assignee: Mindshow Inc.
Inventors: Jonathan Michael Ross, Gil Baron
-
Patent number: 11592896
Abstract: The present disclosure describes techniques for generating, maintaining, and operating a cooperative virtual reality (VR) environment across multiple computing devices. By utilizing these techniques, disparate users are able to work independently or in collaboration on projects within a single VR environment without the latency issues that plague prior VR environments. That is, unlike prior systems which are plagued with latency issues that interrupt the user's VR experience, the techniques described in the present disclosure allow for cooperative VR environments to be rendered in real time across large numbers of computing devices while enabling each computing device to provide a smooth user experience. Additionally, the techniques described herein distribute the data processing and analysis between the VR server and the individual computing devices rendering the cooperative VR environment.
Type: Grant
Filed: November 6, 2019
Date of Patent: February 28, 2023
Assignee: Wild Technology, Inc.
Inventors: Gabriel M. Paez, Eric S. Hackborn, Taylor A. Libonati, Adam G. Micciulla
-
Patent number: 11595508
Abstract: A method of providing content in a terminal includes obtaining a first message that is input through a user interface of the terminal that is provided by a messaging application that executes a messaging service in the terminal; generating content based on the first message, and a second message stored in the terminal; and providing the generated content via the terminal.
Type: Grant
Filed: March 6, 2020
Date of Patent: February 28, 2023
Assignee: SAMSUNG ELECTRONICS CO., LTD.
Inventor: Young-jae Kim
-
Patent number: 11593967
Abstract: A method for point cloud decoding includes receiving a bitstream. The method also includes decoding the bitstream into multiple frames that include pixels. Certain pixels of the multiple frames correspond to points of a three-dimensional (3D) point cloud. The multiple frames include a first set of frames that represent locations of the points of the 3D point cloud and a second set of frames that represent attribute information for the points of the 3D point cloud. The method further includes reconstructing the 3D point cloud based on the first set of frames. Additionally, the method includes identifying a first portion of the points of the reconstructed 3D point cloud based at least in part on a property associated with the multiple frames. The method also includes modifying a portion of the attribute information. The portion of the attribute information that is modified corresponds to the first portion of the points.
Type: Grant
Filed: December 10, 2020
Date of Patent: February 28, 2023
Assignee: Samsung Electronics Co., Ltd.
Inventors: Hossein Najaf-Zadeh, Rajan Laxman Joshi, Madhukar Budagavi
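The "reconstructing the 3D point cloud based on the first set of frames" step can be sketched in the spirit of video-based point cloud compression (V-PCC), which this abstract resembles: occupied pixels of a decoded geometry frame each carry a depth value, and the pixel coordinates plus depth yield a 3D point. Real decoders also apply per-patch offsets and orientations, omitted here; this is an illustrative simplification, not the claimed method:

```python
def reconstruct_points(geometry_frame, occupancy):
    """Rebuild 3D points from a decoded geometry frame. Each occupied
    pixel (u, v) stores a depth value, giving a point (u, v, depth).
    The occupancy map marks which pixels correspond to real points,
    since video codecs pad unoccupied regions with filler values."""
    points = []
    for v, row in enumerate(geometry_frame):
        for u, depth in enumerate(row):
            if occupancy[v][u]:
                points.append((u, v, depth))
    return points

frame = [[10, 0], [0, 30]]
occ = [[1, 0], [0, 1]]
print(reconstruct_points(frame, occ))  # [(0, 0, 10), (1, 1, 30)]
```

Attribute frames (the abstract's second set) are then sampled at the same occupied pixel positions, so each reconstructed point picks up its color or other attributes by index.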
-
Patent number: 11593871
Abstract: Three-dimensional models (or avatars) may be defined based on imaging data captured from a customer. The avatars may be based on a virtual mannequin having one or more dimensions in common with the customer, a body template corresponding to the customer, or imaging data captured from the customer. The avatars are displayed on displays or in user interfaces and used for any purpose, such as to depict how clothing will appear or behave while being worn by a customer alone or with other clothing. Customers may drag-and-drop images of clothing onto the avatars. One or more of the avatars may be displayed on any display, such as a monitor or a virtual reality headset, which may depict the avatars in a static or dynamic mode. Images of avatars and clothing may be used to generate print catalogs depicting the appearance or behavior of the clothing while worn by the customer.
Type: Grant
Filed: December 21, 2020
Date of Patent: February 28, 2023
Assignee: Amazon Technologies, Inc.
Inventors: Robert Yuji Haitani, William R. Hazlewood, Alaa-Eddine Mendili, Dominick Khanh Pham
-
Patent number: 11592303
Abstract: A processing apparatus includes a control unit. The control unit is configured to acquire facility information containing an advertisement or publicity on a facility located along a travel route that a vehicle is scheduled to travel or a facility located within a predetermined range from the travel route, and, while the vehicle is traveling along the travel route, process an image of a first facility associated with the facility information or an image of a second facility present around the first facility based on the facility information and display the image of the first facility or the image of the second facility on a display provided in the vehicle.
Type: Grant
Filed: February 4, 2020
Date of Patent: February 28, 2023
Assignee: TOYOTA JIDOSHA KABUSHIKI KAISHA
Inventors: Kunihiro Murakami, Katsuhiko Sakakibara, Makoto Matsushita, Junya Sato, Kiyonori Yoshida, Tae Sugimura, Takashi Hayashi, Jun Endo
-
Patent number: 11587362
Abstract: In one aspect, a device may include a processor and storage accessible to the processor. The storage may include instructions executable by the processor to receive at least one image that indicates a first gesture being made by a person using a hand-based sign language, with at least part of the first gesture extending out of the image frame of the image. The instructions may then be executable to provide the image to a gesture classifier and to receive plural candidate first text words for the first gesture from the gesture classifier. The instructions may then be executable to use at least a second text word correlated to a second gesture to select one of the candidate first text words, combine the second text word with the selected first text word to establish a text string, and provide the text string to an apparatus different from the device.
Type: Grant
Filed: December 16, 2020
Date of Patent: February 21, 2023
Assignee: Lenovo (Singapore) Pte. Ltd.
Inventors: Jampierre Vieira Rocha, Jeniffer Lensk, Marcelo da Costa Ferreira
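The step of using a neighboring recognized word to pick among candidate words for an ambiguous gesture is, in essence, language-model rescoring. A minimal sketch, where the bigram score table stands in for whatever language model the device might use (the abstract does not specify one):

```python
def select_candidate(candidates, prev_word, bigram_scores):
    """Pick the most plausible candidate word for an ambiguous sign by
    scoring each candidate against the previously recognized word.
    Unseen word pairs default to a score of 0.0."""
    return max(candidates, key=lambda w: bigram_scores.get((prev_word, w), 0.0))

# Illustrative scores: after "good", "morning" is far more likely.
scores = {("good", "morning"): 0.9, ("good", "mourning"): 0.1}
word = select_candidate(["mourning", "morning"], "good", scores)
print("good " + word)  # good morning
```

Combining the context word with the selected candidate then yields the text string the claim sends on to the other apparatus.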
-
Patent number: 11588769
Abstract: Among other things, embodiments of the present disclosure improve the functionality of electronic messaging software and systems by generating and selecting customized media content items (such as images) with avatars of different users within electronic messages based on the context of communications between the users. For example, users of different mobile computing devices can exchange electronic communications with each other, and the system can analyze these communications to present options for media content items containing the users' avatars based on content in the communications, actions or events taken by or involving the users, or combinations thereof. The users may select such media content items for inclusion in their electronic communications.
Type: Grant
Filed: April 12, 2017
Date of Patent: February 21, 2023
Assignee: Snap Inc.
Inventors: Jacob Edward Blackstock, Matthew Colin Grantham, Jason Bernard Innis
-
Patent number: 11589033
Abstract: This patent discloses a method of recording imagery in a way that is larger than a user could visualize. The user can then view the imagery naturally via head tracking and eye tracking, seeing and inspecting the scene as if naturally there viewing it in real time. A smart system for analyzing the viewing parameters of a user and streaming a customized image to be displayed is also taught herein.
Type: Grant
Filed: April 22, 2021
Date of Patent: February 21, 2023
Inventors: Robert Edwin Douglas, Kathleen Mary Douglas, David Byron Douglas
-
Patent number: 11587561Abstract: A communication system is provided that generates human emotion metadata during language translation of verbal content. The communication system includes a media control unit that is coupled to a communication device and a translation server that receive verbal content from the communication device in a first language. An adapter layer having a plurality of filters determines emotion associated with the verbal content, wherein the adapter layer associates emotion metadata with the verbal content based on the determined emotion. The plurality of filters may include user-specific filters and non-user-specific filters. An emotion lexicon is provided that links an emotion value to the corresponding verbal content. The communication system may include a display that graphically displays emotions alongside the corresponding verbal content.Type: GrantFiled: October 25, 2019Date of Patent: February 21, 2023Inventor: Mary Lee Weir
-
Patent number: 11579766Abstract: A computing device can receive an interactive advertisement comprising a first content object and a second content object. The computing device can display the first content object corresponding to a collapsed version of the interactive advertisement. The computing device can receive a first action to activate the interactive advertisement. The computing device can provide for display, responsive to receiving the first action, a target object identifying a location on the display screen to which to move the first content object. The computing device can receive a second action to move the first content object towards the target object. The computing device can then provide for display, the second content object corresponding to an expanded version of the interactive ad on the display screen of the computing device.Type: GrantFiled: November 25, 2020Date of Patent: February 14, 2023Assignee: GOOGLE LLCInventors: Brian Scot Cohen, Lloyd Dee Thompson, Armen Mkrtchyan
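The drag-to-expand interaction described here reduces to a proximity test: the second content object is shown once the first object has been moved close enough to the target. The coordinate convention and the 20-pixel threshold below are assumptions for illustration.

```python
import math

# Sketch of the expand gesture: the collapsed ad expands once the first
# content object is dragged within a threshold distance of the target.
EXPAND_THRESHOLD = 20.0

def should_expand(object_pos, target_pos, threshold=EXPAND_THRESHOLD):
    """Return True when the dragged object is within threshold pixels
    of the target object's location."""
    dx = object_pos[0] - target_pos[0]
    dy = object_pos[1] - target_pos[1]
    return math.hypot(dx, dy) <= threshold
```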
-
Patent number: 11582424Abstract: A system and method for an interactive digitally rendered avatar of a subject person to participate in a web meeting is described. In one embodiment, the method includes receiving an invite to a web meeting on a video conferencing platform, wherein the invite identifies a subject person and the video conferencing platform. The method also includes generating an interactive avatar of the subject person based on a data collection associated with the subject person stored in a database. The method further includes instantiating a platform integrator associated with the video conferencing platform identified in the invite and joining, by the interactive avatar of the subject person, the web meeting on the video conferencing platform. The platform integrator transforms outputs and inputs between the video conferencing platform and an interactive digitally rendered avatar system so that the interactive avatar of the subject person participates in the web meeting.Type: GrantFiled: March 31, 2022Date of Patent: February 14, 2023Assignee: KNOW SYSTEMS CORP.Inventor: Michael E. Kasaba
-
Patent number: 11575882Abstract: An electronic apparatus includes a stacked display including a plurality of panels, and a processor configured to obtain first light field (LF) images of different viewpoints, input the obtained first LF images to an artificial intelligence model for converting an LF image into a layer stack, to obtain a plurality of layer stacks to which a plurality of shifting parameters indicating depth information in the first LF images are respectively applied, and control the stacked display to sequentially and repeatedly display, on the stacked display, the obtained plurality of layer stacks. The artificial intelligence model is trained by applying the plurality of shifting parameters that are obtained based on the depth information in the first LF images.Type: GrantFiled: December 28, 2020Date of Patent: February 7, 2023Assignee: SAMSUNG ELECTRONICS CO., LTD.Inventors: Bora Jin, Yeoul Lee, Jihye Lee, Kyungmin Lim, Jaesung Lee
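A shifting parameter applied to a layer stack can be read as a per-layer horizontal displacement encoding depth. The toy sketch below shifts 2-D layers (lists of pixel rows) by per-layer offsets; the representation and padding rule are illustrative assumptions, not the patent's formulation.

```python
# Illustrative sketch: apply per-layer horizontal shifts to a stack of
# 2-D layers, zero-padding at the edges. Positive shifts move right.
def shift_layer(layer, shift):
    """Shift each row of a layer (list of lists) horizontally."""
    width = len(layer[0])
    out = []
    for row in layer:
        if shift >= 0:
            out.append([0] * shift + row[: width - shift])
        else:
            out.append(row[-shift:] + [0] * (-shift))
    return out

def apply_shifting_parameters(layers, shifts):
    """Produce one shifted layer stack from layers and their shifts."""
    return [shift_layer(layer, s) for layer, s in zip(layers, shifts)]
```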
-
Patent number: 11574425Abstract: Various embodiments of the present invention relate to a method for displaying a stylus pen input, and an electronic device for same, the electronic device including: a touch screen display; a wireless communication circuit; processors operatively connected to the touch screen display and the wireless communication circuit; and a memory operatively connected to the processor. The memory may store instructions which, when executed, cause at least one of the processors to: display a user interface on the touch screen display; receive a drawing input that has at least one drawing path formed with a stylus pen or part of a user's body through the user interface; and display a drawing output on the user interface.Type: GrantFiled: August 2, 2019Date of Patent: February 7, 2023Assignee: Samsung Electronics Co., Ltd.Inventors: Keunsoo Kim, Banghyun Kwon, Jeonghoon Kim, Junyoung Kim, Nahyeong Park, Jinwan An, Jiwoo Lee
-
Patent number: 11573679Abstract: Methods of real-time emoji and emoticon production are disclosed that include: determining, by a computing device, at least one first emotional state of a user from information, wherein the at least one first emotional state is a presently-identified emotional state of the user; providing an emoji or emoticon production template system, wherein the template system includes at least one physical attribute of the user; and utilizing the emoji or emoticon production template system to: analyze the presently-identified emotional state of the user; determine a suitable map of the presently-identified state of the user; map the presently-identified state of the user on an emoji or emoticon production template; produce at least one unique emoji or emoticon based on the map; provide the at least one unique emoji or emoticon to the user, wherein the user selects the at least one unique emoji or emoticon and includes the at least one unique emoji or emoticon in a text message, a direct message, an electronic mail message.Type: GrantFiled: April 30, 2019Date of Patent: February 7, 2023Assignee: The Trustees of the California State UniversityInventors: Li Liu, Dragos Guta
-
Patent number: 11568601Abstract: Technologies are provided herein for modeling and tracking physical objects, such as human hands, within a field of view of a depth sensor. A sphere-mesh model of the physical object can be created and used to track the physical object in real-time. The sphere-mesh model comprises an explicit skeletal mesh and an implicit convolution surface generated based on the skeletal mesh. The skeletal mesh parameterizes the convolution surface and distances between points in data frames received from the depth sensor and the sphere-mesh model can be efficiently determined using the skeletal mesh. The sphere-mesh model can be automatically calibrated by dynamically adjusting positions and associated radii of vertices in the skeletal mesh to fit the convolution surface to a particular physical object.Type: GrantFiled: August 14, 2017Date of Patent: January 31, 2023Assignee: UVic Industry Partnerships Inc.Inventors: Andrea Tagliasacchi, Anastasia Tkach, Mark Pauly
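The key query the abstract mentions — distance from a sensed point to the sphere-mesh surface via the skeletal mesh — can be approximated per edge as distance to the segment minus a radius interpolated between the edge's vertices (a capsule approximation; the true convolution surface is more involved).

```python
import math

# Capsule-style sketch of point-to-sphere-mesh distance: each skeletal
# edge carries a radius at each vertex; the signed distance from a point
# is the distance to the segment minus the interpolated radius.
def distance_to_edge(p, a, ra, b, rb):
    ax, ay, az = a
    bx, by, bz = b
    abx, aby, abz = bx - ax, by - ay, bz - az
    apx, apy, apz = p[0] - ax, p[1] - ay, p[2] - az
    denom = abx * abx + aby * aby + abz * abz
    # Parameter of the closest point on segment ab, clamped to [0, 1].
    t = 0.0 if denom == 0 else max(0.0, min(1.0, (apx * abx + apy * aby + apz * abz) / denom))
    closest = (ax + t * abx, ay + t * aby, az + t * abz)
    radius = ra + t * (rb - ra)
    return math.dist(p, closest) - radius

def distance_to_model(p, edges):
    """Signed distance from p to the nearest capsule; each edge is
    (vertex_a, radius_a, vertex_b, radius_b)."""
    return min(distance_to_edge(p, *edge) for edge in edges)
```

Calibration as described would then adjust the vertex positions and radii to minimize these distances over observed depth frames.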
-
Patent number: 11568645Abstract: An electronic device and a controlling method thereof are provided. A controlling method of an electronic device according to the disclosure includes: performing first learning for a neural network model for acquiring a video sequence including a talking head of a random user based on a plurality of learning video sequences including talking heads of a plurality of users, performing second learning for fine-tuning the neural network model based on at least one image including a talking head of a first user different from the plurality of users and first landmark information included in the at least one image, and acquiring a first video sequence including the talking head of the first user based on the at least one image and pre-stored second landmark information using the neural network model for which the first learning and the second learning were performed.Type: GrantFiled: March 19, 2020Date of Patent: January 31, 2023Assignee: Samsung Electronics Co., Ltd.Inventors: Victor Sergeevich Lempitsky, Aliaksandra Petrovna Shysheya, Egor Olegovich Zakharov, Egor Andreevich Burkov
-
Patent number: 11570418Abstract: Techniques for efficiently generating and displaying light-field data are disclosed. In one particular embodiment, the techniques may be realized as a method for generating light-field data, the method comprising receiving input image data, synthesizing a first plurality of viewpoints based on the input image data, synthesizing a second plurality of viewpoints based on cached image data, combining the first and second plurality of viewpoints, yielding a plurality of blended viewpoints, displaying the plurality of blended viewpoints, and caching image data associated with the plurality of blended viewpoints.Type: GrantFiled: June 17, 2021Date of Patent: January 31, 2023Assignee: CREAL SAInventor: Grégoire André Joseph Hirt
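The combine-and-cache loop the abstract outlines might look like the following, where freshly synthesized viewpoint images are alpha-blended with cached ones and the cache is refreshed with the result. The flat-list image format and the 0.7 blend weight are assumptions for illustration.

```python
# Sketch of the viewpoint synthesis/cache blending step. Viewpoints are
# keyed by id; images are flat lists of pixel intensities.
class ViewpointBlender:
    def __init__(self, new_weight=0.7):
        self.cache = {}
        self.new_weight = new_weight

    def blend(self, viewpoint_id, new_image):
        """Blend a newly synthesized image with the cached one (if any),
        update the cache, and return the blended image."""
        cached = self.cache.get(viewpoint_id)
        if cached is None:
            blended = list(new_image)
        else:
            w = self.new_weight
            blended = [w * n + (1 - w) * c for n, c in zip(new_image, cached)]
        self.cache[viewpoint_id] = blended
        return blended
```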
-
Patent number: 11568864Abstract: A computing system for generating image data representing a speaker's face includes a detection device configured to route data representing a voice signal to one or more processors and a data processing device comprising the one or more processors configured to generate a representation of a speaker that generated the voice signal in response to receiving the voice signal. The data processing device executes a voice embedding function to generate a feature vector from the voice signal representing one or more signal features of the voice signal, maps a signal feature of the feature vector to a visual feature of the speaker by a modality transfer function specifying a relationship between the visual feature of the speaker and the signal feature of the feature vector; and generates a visual representation of at least a portion of the speaker based on the mapping, the visual representation comprising the visual feature.Type: GrantFiled: August 13, 2019Date of Patent: January 31, 2023Assignee: Carnegie Mellon UniversityInventor: Rita Singh
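The modality transfer function — mapping signal features of a voice embedding to visual features of the speaker — could be sketched, purely for illustration, as a fixed linear transform. The weights and the two visual attribute names below are invented; the patent does not state that the mapping is linear.

```python
# Hypothetical modality-transfer sketch: voice feature vector -> named
# visual attributes via a fixed weight matrix (all values illustrative).
TRANSFER_WEIGHTS = [
    [0.5, 0.1],   # weights producing "jaw_width"
    [0.2, 0.9],   # weights producing "age_estimate"
]
VISUAL_ATTRIBUTES = ["jaw_width", "age_estimate"]

def modality_transfer(voice_features):
    """Map a voice feature vector to a dict of visual features."""
    visual = {}
    for name, row in zip(VISUAL_ATTRIBUTES, TRANSFER_WEIGHTS):
        visual[name] = sum(w * f for w, f in zip(row, voice_features))
    return visual
```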
-
Patent number: 11562520Abstract: Provided is a method for controlling avatar motion, which is operated in a user terminal and includes receiving an input audio by an audio sensor, and controlling, by one or more processors, a motion of a first user avatar based on the input audio.Type: GrantFiled: March 17, 2021Date of Patent: January 24, 2023Assignee: LINE PLUS CORPORATIONInventor: Yunji Lee
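A minimal version of audio-driven avatar motion maps each audio frame's loudness to a motion parameter, e.g. mouth openness. The frame representation and normalization constant below are illustrative assumptions.

```python
# Minimal sketch: frame RMS loudness drives a mouth-open parameter
# clamped to [0, 1].
def rms(frame):
    """Root-mean-square amplitude of one audio frame (list of samples)."""
    return (sum(s * s for s in frame) / len(frame)) ** 0.5

def mouth_openness(frame, max_rms=1.0):
    """Map frame loudness to a clamped mouth-open parameter."""
    return min(1.0, rms(frame) / max_rms)

def animate(frames):
    """One mouth-open value per audio frame."""
    return [mouth_openness(f) for f in frames]
```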
-
Patent number: 11563998Abstract: One aspect of the invention relates to a video distribution system for live distributing a video containing a virtual space and an animation of a character object generated based on a motion of a distributor user. The video distribution system determines, when receiving from a viewer user watching the video a participation request to request participation in the video, which one of first and second groups the viewer user belongs to. If the viewer user is determined to belong to the first group, the video distribution system arranges a specific avatar of the viewer user in a first region within the virtual space. If the viewer user is determined to belong to the second group, the video distribution system arranges the specific avatar in a second region within the virtual space as long as a condition for participation is satisfied.Type: GrantFiled: December 24, 2019Date of Patent: January 24, 2023Assignee: GREE, INC.Inventors: Naohide Otsuka, Takashi Yasukawa, Yousuke Yamanouchi, Yasunori Kurita
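The placement rule described — first-group viewers go to the first region unconditionally, second-group viewers to the second region only if a participation condition holds — reduces to a small decision function. The group labels and the capacity-style condition are invented for illustration.

```python
# Sketch of the participation-request handling: returns the region in
# which to place the viewer's avatar, or None if the request is rejected.
def place_avatar(viewer_group, region_two_occupancy, region_two_capacity=10):
    if viewer_group == "first":
        return "region_one"
    if viewer_group == "second" and region_two_occupancy < region_two_capacity:
        return "region_two"
    return None
```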
-
Patent number: 11562025Abstract: A resource dependency system displays two dynamically interactive interfaces in a resource dependency user interface, a hierarchical resource repository and a dependency graph user interface. User interactions on each interface can dynamically update either interface. For example, a selection of a particular resource in the dependency graph user interface causes the system to update the dependency graph user interface to indicate the selection and also updates the hierarchical resource repository to navigate to the appropriate folder corresponding to the stored location of the selected resource. In another example, a selection of a particular resource in the hierarchical resource repository causes the system to update the hierarchical resource repository to indicate the selection and also updates the dependency graph user interface to display an updated graph, indicate the selection and, in some embodiments, focus on the selected resource by zooming into a portion of the graph.Type: GrantFiled: May 10, 2021Date of Patent: January 24, 2023Assignee: Palantir Technologies Inc.Inventors: Adam Borochoff, Joseph Rafidi, Parvathy Menon
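The two-way synchronization described — one selection event updating both the hierarchical repository and the dependency graph — might be modeled as a single handler that derives both view states. The folder layout and dependency data below are made-up examples.

```python
# Sketch of the two synchronized views: selecting a resource updates the
# tree navigation path and the graph focus/neighborhood together.
class ResourceDependencyUI:
    def __init__(self, resource_paths, dependencies):
        self.resource_paths = resource_paths   # resource -> folder path
        self.dependencies = dependencies       # resource -> its dependencies
        self.selected = None

    def select(self, resource):
        """Update both views from a single selection event."""
        self.selected = resource
        return {
            "tree_path": self.resource_paths[resource],
            "graph_focus": resource,
            "graph_neighbors": self.dependencies.get(resource, []),
        }
```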
-
Patent number: 11562505Abstract: This invention provides a system and method for displaying color match information on an acquired image of an object. A model/pattern having a plurality of color test points at locations of stable color is provided. A display process generates visible geometric shapes with respect to the color test points in a predetermined color. An alignment process aligns features of the object with respect to features on the model so that the geometric shapes appear in locations on the object that correspond to locations on the model. The geometric shapes can comprise closed shapes that surround a region expected to be stable color on the object. Such shapes can define circles, squares, diamonds or any other acceptable closed or open shape that is visible to the user on the display.Type: GrantFiled: March 9, 2019Date of Patent: January 24, 2023Assignee: Cognex CorporationInventors: Jason Davis, Zihan Hans Liu, Nathaniel R. Bogan
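The per-test-point check behind this display — compare the observed color at an aligned location against the model's expected color, then draw a pass/fail shape there — could be sketched as below. The per-channel tolerance, shape, and colors are illustrative choices.

```python
# Sketch of the color-match annotation: each model test point stores an
# expected RGB value; the aligned object image is sampled at the same
# location and compared within a per-channel tolerance.
def color_matches(expected_rgb, observed_rgb, tolerance=16):
    """True if every channel is within tolerance of the expected value."""
    return all(abs(e - o) <= tolerance for e, o in zip(expected_rgb, observed_rgb))

def annotate_test_points(test_points, image):
    """test_points: list of ((x, y), expected_rgb);
    image: dict mapping (x, y) -> observed rgb at that pixel."""
    annotations = []
    for (x, y), expected in test_points:
        ok = color_matches(expected, image[(x, y)])
        annotations.append({"pos": (x, y), "shape": "circle",
                            "color": "green" if ok else "red"})
    return annotations
```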
-
Patent number: 11553828Abstract: A diagnosis support apparatus performs identification for a plurality of support items, which are identification classifications about diagnosis support, and the diagnosis support apparatus is provided with a processor. The processor performs analysis processing for acquiring analysis results including an analysis result about an observation mode by analyzing at least one of an input signal specifying the observation mode and an observation image obtained by observing an inside of a subject with an endoscope; performs support item setting processing for setting a support item corresponding to the analysis results obtained by the analysis processing, among the plurality of support items, which are the identification classifications; and generates diagnosis support information, which is information used for diagnosis of a lesion candidate area included in the observation image, based on an identification index corresponding to the set support item and the observation image.Type: GrantFiled: January 17, 2020Date of Patent: January 17, 2023Assignee: OLYMPUS CORPORATIONInventors: Takashi Kono, Makoto Kitamura, Hirokazu Godo, Toshiya Kamiyama, Katsuyoshi Taniguchi, Yamato Kanda
-
Patent number: 11551393Abstract: Systems and methods for animating from audio in accordance with embodiments of the invention are illustrated. One embodiment includes a method for generating animation from audio. The method includes steps for receiving input audio data, generating an embedding for the input audio data, and generating several predictions for several tasks from the generated embedding. The several predictions includes at least one of blendshape weights, event detection, and/or voice activity detection. The method includes steps for generating a final prediction from the several predictions, where the final prediction includes a set of blendshape weights, and generating an output based on the generated final prediction.Type: GrantFiled: July 23, 2020Date of Patent: January 10, 2023Assignee: LoomAi, Inc.Inventors: Chong Shang, Eloi Henri Homere Du Bois, Inaki Navarro, Will Welch, Rishabh Battulwar, Ian Sachs, Vivek Verma, Kiran Bhat
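The final-prediction step — merging blendshape weights with voice-activity and event detections — could work roughly as follows: suppress blendshapes during silence, and let a detected event override the relevant shapes. The blendshape names, the "laugh" event, and the 0.5 VAD threshold are assumptions for illustration.

```python
# Sketch of merging multi-task outputs into final blendshape weights.
def final_prediction(blendshapes, vad_score, events, vad_threshold=0.5):
    """blendshapes: dict name -> weight; vad_score: voice-activity
    probability; events: detected non-speech events (e.g. 'laugh')."""
    weights = dict(blendshapes)
    if vad_score < vad_threshold and "laugh" not in events:
        weights = {name: 0.0 for name in weights}   # silence: neutral face
    if "laugh" in events:
        weights["mouth_smile"] = max(weights.get("mouth_smile", 0.0), 0.8)
    return weights
```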
-
Patent number: 11544886Abstract: In one embodiment, a method includes, by one or more computing systems: receiving one or more non-video inputs, where the one or more non-video inputs include at least one of a text input, an audio input, or an expression input, accessing a K-NN graph including several sets of nodes, where each set of nodes corresponds to a particular semantic context out of several semantic contexts, determining one or more actions to be performed by a digital avatar based on the one or more identified semantic contexts, generating, in real-time in response to receiving the one or more non-video inputs and based on the determined one or more actions, a video output of the digital avatar including one or more human characteristics corresponding to the one or more identified semantic contexts, and sending, to a client device, instructions to present the video output of the digital avatar.Type: GrantFiled: December 16, 2020Date of Patent: January 3, 2023Inventors: Abhijit Z Bendale, Pranav K Mistry
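The context-lookup step — matching an input embedding against sets of nodes in a K-NN graph and mapping the winning semantic context to an avatar action — could be sketched with toy 2-D embeddings. All node embeddings, context names, and actions below are invented placeholders.

```python
import math

# Sketch of the K-NN semantic-context lookup: the k nearest graph nodes
# vote on a context, and the context maps to a digital-avatar action.
GRAPH_NODES = [
    ((0.1, 0.9), "greeting"), ((0.2, 0.8), "greeting"),
    ((0.9, 0.1), "farewell"), ((0.8, 0.2), "farewell"),
]
CONTEXT_ACTIONS = {"greeting": "wave", "farewell": "bow"}

def nearest_context(embedding, k=3):
    """Majority context among the k nodes nearest to the embedding."""
    ranked = sorted(GRAPH_NODES, key=lambda node: math.dist(embedding, node[0]))
    votes = {}
    for _, context in ranked[:k]:
        votes[context] = votes.get(context, 0) + 1
    return max(votes, key=votes.get)

def avatar_action(embedding):
    """Action the avatar should perform for this input embedding."""
    return CONTEXT_ACTIONS[nearest_context(embedding)]
```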