Patents by Inventor Nitin Vats

Nitin Vats has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11783524
    Abstract: A method for providing visual sequences using one or more images, comprising: receiving one or more images of a person showing at least one face; receiving a message to be enacted by the person, wherein the message comprises at least a text or an emotional and movement command; processing the message to extract or receive audio data related to the voice of the person, and facial movement data related to the expression to be carried on the face of the person; processing the image/s, the audio data, and the facial movement data; and generating an animation of the person enacting the message. Here, an emotional and movement command is a GUI or multimedia based instruction that invokes the generation of facial expression/s and/or body part/s movement.
    Type: Grant
    Filed: February 10, 2017
    Date of Patent: October 10, 2023
    Inventor: Nitin Vats
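The pipeline claimed above can be sketched roughly in code. This is a minimal, hypothetical illustration: the class names, the per-word audio stand-in, and the round-robin expression assignment are all assumptions for demonstration, not details taken from the patent.

```python
from dataclasses import dataclass, field

# Hypothetical containers; names are illustrative, not from the patent.
@dataclass
class Message:
    text: str = ""
    emotion_commands: list = field(default_factory=list)  # e.g. ["smile", "nod"]

@dataclass
class AnimationFrame:
    audio_chunk: str
    facial_expression: str

def enact_message(person_images, message):
    """Sketch of the claimed flow: derive audio data and facial-movement
    data from the message, then combine them with the person image/s."""
    if not person_images:
        raise ValueError("at least one image showing a face is required")
    # Stand-ins for real TTS and expression-synthesis components.
    audio_data = message.text.split()            # one "audio chunk" per word
    movements = message.emotion_commands or ["neutral"]
    frames = []
    for i, chunk in enumerate(audio_data):
        frames.append(AnimationFrame(chunk, movements[i % len(movements)]))
    return frames

frames = enact_message(["face.jpg"], Message("hello there", ["smile"]))
```

In a real system the audio chunks would come from a text-to-speech model and the expressions from a facial-animation model; the structure of the loop is what the abstract describes.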
  • Patent number: 11736756
    Abstract: A method for providing visual sequences using one or more images, comprising: receiving one or more images of a person showing at least one face; using human body information to identify the requirement for other body part/s; receiving at least one image or photograph of other body part/s based on the identified requirement; processing the image/s of the person with the image/s of other body part/s using the human body information to generate a body model of the person, the body model comprising the face of the person; receiving a message to be enacted by the person, wherein the message comprises at least a text or an emotional and movement command; processing the message to extract or receive audio data related to the voice of the person, and facial movement data related to the expression to be carried on the face of the person; processing the body model, the audio data, and the facial movement data; and generating an animation of the body model of the person enacting the message. Here, an emotional and movement command is a GUI or multimedia based instruction that invokes the generation of facial expression/s and/or body part/s movement.
    Type: Grant
    Filed: February 10, 2017
    Date of Patent: August 22, 2023
    Inventor: Nitin Vats
  • Patent number: 11450075
    Abstract: A method for generating a body model of a person wearing a cloth includes receiving a user input related to a person, wherein the user input comprises at least one image or photograph of the person, and at least one image shows the face of the person; using human body information to identify the requirement for other body part/s; receiving at least one image or photograph of other body part/s based on the identified requirement; processing the image/s of the person with the image/s or photograph/s of other body part/s using the human body information to generate a body model of the person, wherein the body model represents the person whose image/photograph was received as user input and comprises the face of the person; receiving an image of a cloth according to the shape and size of the body model of the person; and combining the body model of the person and the image of the cloth to show the body model wearing the cloth.
    Type: Grant
    Filed: January 27, 2017
    Date of Patent: September 20, 2022
    Inventor: Nitin Vats
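The final combining step of this virtual try-on method amounts to compositing a cloth image over the body-model image. A minimal sketch, assuming same-size grayscale images represented as nested lists and a per-pixel opacity mask (all illustrative simplifications, not the patent's actual representation):

```python
def composite_cloth(body, cloth, alpha):
    """Overlay a cloth image onto a body-model image of the same size.
    `alpha` is the cloth's per-pixel opacity mask (1 = cloth, 0 = body)."""
    h, w = len(body), len(body[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            a = alpha[y][x]
            # Standard alpha-over blend of cloth pixel onto body pixel.
            out[y][x] = a * cloth[y][x] + (1 - a) * body[y][x]
    return out

body  = [[10, 10], [10, 10]]   # toy 2x2 grayscale "body model"
cloth = [[90, 90], [90, 90]]
mask  = [[1, 0], [0, 1]]       # cloth covers the diagonal only
result = composite_cloth(body, cloth, mask)
```

A production system would warp the cloth to the body model's shape and size before this blend; the abstract's "according to shape and size" step covers that fitting.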
  • Publication number: 20200104028
    Abstract: A method for realistically interacting with a 3D model of an object in a 3D computer graphics environment, wherein the displayed 3D model is capable of performing user-controlled interaction and has at least one virtual interactive display mimicking an interactive display of the object. The method includes: receiving an input for interaction on the 3D model; if the interaction input is provided in the region of the virtual interactive display, applying the interaction input only to the graphical user interface of that virtual interactive display, while the 3D model or its part/s do not receive the input in that region, whatever orientation or perspective the virtual interactive display is in while synchronized with the 3D model; and processing the interaction input and producing a corresponding change in multimedia on the virtual interactive display, or performing user-controlled interaction in the 3D model or its part/s, or a combination thereof.
    Type: Application
    Filed: August 22, 2018
    Publication date: April 2, 2020
    Inventor: Nitin Vats
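The core routing rule above, that a click inside the virtual display's region goes to that display's GUI and never to the 3D model, can be sketched as a simple screen-space hit test. The function name and the bounding-box representation are assumptions; a renderer would update the region as the model rotates:

```python
def route_input(click, display_region):
    """Decide whether an interaction input belongs to the embedded virtual
    interactive display or to the 3D model itself. `display_region` is the
    display's current screen-space bounding box (x, y, w, h), kept in sync
    with the model's orientation by the renderer."""
    x, y, w, h = display_region
    cx, cy = click
    if x <= cx < x + w and y <= cy < y + h:
        return "virtual_display"   # the display's GUI consumes the input
    return "model"                 # otherwise the 3D model handles it

target = route_input((5, 5), (0, 0, 10, 10))
```

With a perspective-projected display, the rectangle test would be replaced by a test against the display quad's projected outline, but the either/or routing is the same.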
  • Publication number: 20200065559
    Abstract: A method for generating a video using a user image/video includes providing a user image/video based on user input to a processor, the user image/video comprising a face; receiving a scene video which comprises a body model of a person, where the body model represents an image of a human with a specified face region; extracting the face image/s from the user image/video; receiving face position information for different frames of the scene video; and processing the extracted face image/s and the frames of the scene video using the face position information to generate a processed video, in which the face and the body model are aligned together to represent a single person. The face region is the space for the face, with or without the neck portion and/or hair, in a scene video frame. The face position information comprises at least one of the tilt of the face, the orientation of the face, the geometrical location of the face region, the boundary of the face region, and the zoom of the face region.
    Type: Application
    Filed: August 22, 2018
    Publication date: February 27, 2020
    Inventor: Nitin Vats
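The per-frame alignment this abstract describes can be sketched as pairing each scene frame with the extracted face and that frame's position record. The dictionary keys and defaults below are illustrative assumptions standing in for the tilt/orientation/zoom data the abstract lists:

```python
def place_face(scene_frames, face, positions):
    """For each scene frame, record where and how the extracted face
    should be pasted, using per-frame position info (location, tilt, zoom)."""
    processed = []
    for frame_id, frame in enumerate(scene_frames):
        info = positions[frame_id]   # e.g. {"x": .., "y": .., "tilt": .., "zoom": ..}
        processed.append({
            "frame": frame,
            "face": face,
            "x": info["x"], "y": info["y"],
            "tilt": info.get("tilt", 0.0),   # default: face not rotated
            "zoom": info.get("zoom", 1.0),   # default: face at native scale
        })
    return processed

out = place_face(["f0", "f1"], "face.png",
                 [{"x": 10, "y": 20}, {"x": 12, "y": 20}])
```

An actual renderer would rotate and scale the face by `tilt` and `zoom` and blend it into the frame at `(x, y)`; this sketch only shows the bookkeeping that drives that step.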
  • Patent number: 10497165
    Abstract: Texturing of external and/or internal surfaces, or of internal parts, of 3D models representing real objects, to provide an extremely realistic, vivid, and detailed view on and/or within the 3D model, is made possible using a plurality of real photographs and/or video of the real objects. The 3D models are 3D computer graphics models used for implementing user-controlled interactions. The texture on a 3D model textured with real photographs and/or video replicates the texture of the real 3D object. Applying video as a texture makes it possible to display realistic effects on the 3D model's surface, such as the blinking of a physical light-emitting device on the real object, for example the head light or rear light of an automotive vehicle.
    Type: Grant
    Filed: March 15, 2014
    Date of Patent: December 3, 2019
    Inventors: Nitin Vats, Gaurav Vats
  • Publication number: 20190197755
    Abstract: A method for providing visual sequences using one or more images, comprising: receiving one or more images of a person showing at least one face; receiving a message to be enacted by the person, wherein the message comprises at least a text or an emotional and movement command; processing the message to extract or receive audio data related to the voice of the person, and facial movement data related to the expression to be carried on the face of the person; processing the image/s, the audio data, and the facial movement data; and generating an animation of the person enacting the message. Here, an emotional and movement command is a GUI or multimedia based instruction that invokes the generation of facial expression/s and/or body part/s movement.
    Type: Application
    Filed: February 10, 2017
    Publication date: June 27, 2019
    Inventor: Nitin Vats
  • Publication number: 20190082211
    Abstract: A method for providing visual sequences using one or more images, comprising: receiving one or more images of a person showing at least one face; using human body information to identify the requirement for other body part/s; receiving at least one image or photograph of other body part/s based on the identified requirement; processing the image/s of the person with the image/s of other body part/s using the human body information to generate a body model of the person, the body model comprising the face of the person; receiving a message to be enacted by the person, wherein the message comprises at least a text or an emotional and movement command; processing the message to extract or receive audio data related to the voice of the person, and facial movement data related to the expression to be carried on the face of the person; processing the body model, the audio data, and the facial movement data; and generating an animation of the body model of the person enacting the message. Here, an emotional and movement command is a GUI or multimedia based instruction that invokes the generation of facial expression/s and/or body part/s movement.
    Type: Application
    Filed: February 10, 2017
    Publication date: March 14, 2019
    Inventor: Nitin Vats
  • Publication number: 20190045270
    Abstract: A method for realistically interacting with a user profile on a social media network, where the social media network represents a network of user profiles owned by their users, the user profiles being connected to each other at various levels of relationship or non-connected, and each user profile comprising an image showing the face of the user. The method includes: receiving a user request related to one of the user profiles on the social media network, wherein the user request is for interacting with the user owning that profile; and analysing the user request and providing displaying information from at least one of user profile initial information, user profile activity information, or a combination thereof, based on the user request, wherein the displaying information is a video or animation showing the face of the user, and wherein the user profile initial information is information provided while creating the user profile on the social media network or later updated in the profile.
    Type: Application
    Filed: February 10, 2017
    Publication date: February 7, 2019
    Inventor: Nitin Vats
  • Publication number: 20190026954
    Abstract: A method for generating a body model of a person wearing a cloth includes receiving a user input related to a person, wherein the user input comprises at least one image or photograph of the person, and at least one image shows the face of the person; using human body information to identify the requirement for other body part/s; receiving at least one image or photograph of other body part/s based on the identified requirement; processing the image/s of the person with the image/s or photograph/s of other body part/s using the human body information to generate a body model of the person, wherein the body model represents the person whose image/photograph was received as user input and comprises the face of the person; receiving an image of a cloth according to the shape and size of the body model of the person; and combining the body model of the person and the image of the cloth to show the body model wearing the cloth.
    Type: Application
    Filed: January 27, 2017
    Publication date: January 24, 2019
    Inventor: Nitin Vats
  • Publication number: 20180240363
    Abstract: A computing device for the visually impaired with a tactile refreshable Braille display, the display adapted to show a computer application, the computer application being characterized by the presence of command/control/tool button symbols and names and/or reading material having at least one of heading names, hyperlink names, or compressed data names, plus detailed text or a graphical GUI. The display shows the computer application by a combination of one or more of the following: a unique Braille symbol at different placings on the tactile surface represents different command/control button symbols and names, heading names, hyperlink names, or compressed data names, wherein the meaning of the unique Braille symbol depends on its placing on the tactile display; standard Braille character/s representing the detailed text; and standard Braille character/s at a first predefined region of the tactile display showing the meaning of the unique Braille symbol according to its placing on the tactile surface.
    Type: Application
    Filed: August 14, 2015
    Publication date: August 23, 2018
    Inventor: Nitin Vats
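The key idea here, the same Braille cell meaning different things depending on where it sits on the tactile surface, can be sketched as a lookup keyed by (region, symbol). The region names and meanings below are purely illustrative examples, not from the patent:

```python
# Meaning of the same Braille cell depends on where it sits on the
# tactile surface; region names and meanings are illustrative only.
SYMBOL_MEANING = {
    ("toolbar", "⠿"): "Save command",
    ("body",    "⠿"): "hyperlink: Contact page",
}

def read_cell(region, cell):
    """Resolve a unique Braille symbol by its placing on the display;
    cells with no special placement meaning read as ordinary text."""
    return SYMBOL_MEANING.get((region, cell), "standard text character")
```

The "first predefined region" of the abstract would then render the resolved meaning string in standard Braille characters when the user touches the symbol.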
  • Publication number: 20180239514
    Abstract: A method for providing interaction with a virtual object in a virtual space, the method includes providing a panoramic video of the virtual space, wherein one or more portion/s of one or more frames of the panoramic video are clickable; receiving a user input over at least one of the portions of at least one of the frames of the panoramic video; and loading a video or a 3-dimensional model of the virtual object which is predefined for the particular portion of the frame/s for which the user input is received.
    Type: Application
    Filed: August 14, 2015
    Publication date: August 23, 2018
    Inventor: Nitin Vats
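The click-to-asset mapping this method describes can be sketched as a table keyed by frame and region. The hit test, region names, and asset filenames below are all hypothetical stand-ins (a real implementation would carry per-frame region polygons authored with the panoramic video):

```python
# Map (frame index, region id) -> asset to load; data is illustrative.
CLICK_MAP = {
    (0, "sofa"): "sofa_3d_model.glb",
    (0, "tv"):   "tv_demo_video.mp4",
}

def region_in_frame(frame_idx, click_xy):
    """Toy hit test for a 100x100 frame split down the middle;
    a real system would test against authored region polygons."""
    x, _ = click_xy
    return "sofa" if x < 50 else "tv"

def handle_click(frame_idx, click_xy):
    """Return the video or 3D model predefined for the clicked portion,
    or None when the clicked portion has no asset attached."""
    region = region_in_frame(frame_idx, click_xy)
    return CLICK_MAP.get((frame_idx, region))
```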
  • Patent number: 9911243
    Abstract: A computer implemented method for visualization of a 3D model of an object, wherein the method includes: generating and displaying a first view of the 3D model; receiving a user input, the user input being one or more interaction commands comprising interactions for customization of the 3D model by at least one of adding, removing, replacing, scaling, or changing geometry, or a combination thereof, of mechanical, electronic, digital, or pneumatic part/s of the 3D model by changing texture and/or graphics data of the part; identifying one or more interaction commands; in response to the identified command/s, rendering the corresponding interaction to the 3D model of the object, with or without sound output, using texture data, computer graphics data, and selectively sound data of the 3D model; and displaying the corresponding interaction to the 3D model.
    Type: Grant
    Filed: March 16, 2015
    Date of Patent: March 6, 2018
    Inventor: Nitin Vats
  • Publication number: 20180033210
    Abstract: A computer implemented method for visualization of a virtual model of an object over a cut-to-shape screen, wherein the screen dimensionally represents the virtual model of the object, such that the outer boundary of the virtual model is either exactly or nearly aligned to the boundary of the cut-to-shape screen. The method includes: generating and displaying a first view of the virtual model onto the cut-to-shape screen, such that the outer boundary of the three-dimensional model is either exactly or nearly aligned to the boundary of the cut-to-shape screen; receiving a user input, the user input being one or more interaction commands, where each interaction command is provided for performing user-controlled interactions in real time, an interaction command being defined as an input command for performing different operations on different part/s of the virtual model to observe the virtual model and/or to experience the functionality of the virtual model and its part/s; and identifying one or more interaction commands.
    Type: Application
    Filed: January 23, 2015
    Publication date: February 1, 2018
    Inventor: Nitin Vats
  • Publication number: 20170237941
    Abstract: A method for videoconferencing includes the steps of: receiving audio and video frames from multiple locations, with at least one person at each location; processing the video frames received from every location except a base location, the processing extracting the person/s by removing the background from the video frames of each location; merging the processed video frames with the base video to generate a merged video, so that the merged video gives an impression of co-presence of the persons from all locations at the location of the base video; and displaying the merged video.
    Type: Application
    Filed: August 14, 2015
    Publication date: August 17, 2017
    Inventor: Nitin Vats
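The extract-then-merge step above can be sketched on toy one-dimensional "frames". The background subtraction here is a deliberately crude exact-match test against a known static background (real systems use statistical background models or segmentation networks); all names are illustrative:

```python
def extract_person(frame, background):
    """Crude background removal: pixels that match the known static
    background become None (transparent)."""
    return [px if px != bg else None
            for px, bg in zip(frame, background)]

def merge(base_frame, other_frames, other_backgrounds):
    """Overlay extracted persons from every non-base location onto the
    base frame, giving the impression of co-presence."""
    merged = list(base_frame)
    for frame, bg in zip(other_frames, other_backgrounds):
        cutout = extract_person(frame, bg)
        for i, px in enumerate(cutout):
            if px is not None:      # person pixel wins over base background
                merged[i] = px
    return merged

# One remote location: its frame is all-background except one person pixel.
merged = merge([1, 1, 1, 1], [[9, 5, 9, 9]], [[9, 9, 9, 9]])
```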
  • Publication number: 20170124770
    Abstract: A computer implemented method for visualization of a 3D model of an object, the method includes: generating and displaying a first view of the 3D model; receiving a user input, the user input being one or more interaction commands comprising interactions for understanding a particular functionality of the 3D model, wherein the functionality of the 3D model is demonstrated by automatic operation of the part/s of the 3D model, which operate in an ordered manner to perform the particular functionality; identifying one or more interaction commands; in response to the identified command/s, rendering the corresponding interaction to the 3D model of the object, with or without sound output, using texture data, computer graphics data, and selectively sound data of the 3D model; and displaying the corresponding interaction to the 3D model, wherein operating in an ordered manner includes parallel or sequential operation of part/s.
    Type: Application
    Filed: March 16, 2015
    Publication date: May 4, 2017
    Inventor: Nitin Vats
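The "ordered manner" of operating parts, sequential across steps, parallel within a step, can be sketched as a schedule of part groups. The schedule format and the washing-machine example are assumptions for illustration only:

```python
def demonstrate(parts_schedule):
    """Run a functionality demo: each schedule step lists the part/s that
    operate together (parallel within a step, sequential across steps).
    Returns (step, part) events in the order they would be triggered."""
    events = []
    for step, parts in enumerate(parts_schedule):
        for part in parts:          # same step index -> parallel operation
            events.append((step, part))
    return events

# Hypothetical demo: a door closes first, then drum and pump run together.
events = demonstrate([["door"], ["drum", "pump"]])
```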
  • Publication number: 20170103584
    Abstract: A computer implemented method for visualization of a 3D model of an object, wherein the method includes: generating and displaying a first view of the 3D model; receiving a user input, the user input being one or more interaction commands comprising interactions for customization of the 3D model by at least one of adding, removing, replacing, scaling, or changing geometry, or a combination thereof, of mechanical, electronic, digital, or pneumatic part/s of the 3D model by changing texture and/or graphics data of the part; identifying one or more interaction commands; in response to the identified command/s, rendering the corresponding interaction to the 3D model of the object, with or without sound output, using texture data, computer graphics data, and selectively sound data of the 3D model; and displaying the corresponding interaction to the 3D model.
    Type: Application
    Filed: March 16, 2015
    Publication date: April 13, 2017
    Inventor: Nitin Vats
  • Patent number: 9542067
    Abstract: An electronic panel system, an arrangement, and methods are provided for enriched visualization and a user-controlled interaction experience with life-size, real-looking 3D models of real objects. The arrangement, comprising a virtual product assistant sub-system and the electronic panel system, facilitates receiving real-time product information related to the current state of the 3D model representing the real object displayed on the electronic panel system. Advanced user-controlled interactions with the displayed 3D model are made possible: deletion interaction, addition interaction, immersive interactions, linked movement interactions, interaction for getting an uninterrupted view of internal parts using a transparency-opacity effect, inter-interactions, liquid and fumes flow interactions, extrusive and intrusive interactions, time-bound-changes-based interactions, environment-mapping-based interactions, and engineering disintegration interactions.
    Type: Grant
    Filed: March 25, 2014
    Date of Patent: January 10, 2017
    Inventors: Nitin Vats, Gaurav Vats
  • Publication number: 20160307357
    Abstract: Texturing of external and/or internal surfaces, or of internal parts, of 3D models representing real objects, to provide an extremely realistic, vivid, and detailed view on and/or within the 3D model, is made possible using a plurality of real photographs and/or video of the real objects. The 3D models are 3D computer graphics models used for implementing user-controlled interactions. The texture on a 3D model textured with real photographs and/or video replicates the texture of the real 3D object. Applying video as a texture makes it possible to display realistic effects on the 3D model's surface, such as the blinking of a physical light-emitting device on the real object, for example the head light or rear light of an automotive vehicle.
    Type: Application
    Filed: March 15, 2014
    Publication date: October 20, 2016
    Inventors: Nitin Vats, Gaurav Vats
  • Patent number: 9405432
    Abstract: A method, technology, and system of user-controlled realistic 3D simulation and interaction are disclosed for providing a realistic and enhanced digital object viewing and interaction experience with improved three-dimensional (3D) visualization effects. A solution is provided to make available 3D model/s carrying properties similar to the real object, where user-controlled realistic interactions selected from extrusive interactions, intrusive interactions, time-bound-changes-based interactions, and real-environment-mapping-based interactions can be performed as per the user's choice.
    Type: Grant
    Filed: July 19, 2013
    Date of Patent: August 2, 2016
    Inventors: Nitin Vats, Gaurav Vats