Patents by Inventor Jayesh Rajkumar VACHHANI

Jayesh Rajkumar VACHHANI has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240321318
    Abstract: A method may include obtaining a plurality of frames of at least one video; determining an occurrence of a change from a first event of the at least one video to a second event of the video; determining a location in the plurality of frames; inserting one or more masked frames at the location between at least one frame of the first event and at least one frame of the second event; determining at least one transition component present in the at least one frame of the first event and the at least one frame of the second event; determining a motion of pixels for the at least one transition component across the plurality of frames of the first event and the plurality of frames of the second event; and providing at least one transition effect in the one or more masked frames between the first event and the second event.
    Type: Application
    Filed: March 21, 2024
    Publication date: September 26, 2024
    Applicant: Samsung Electronics Co., Ltd.
    Inventors: Jayesh Rajkumar VACHHANI, Sourabh Vasant GOTHE, Vibhav AGARWAL, Barath Raj Kandur RAJA, Likhith AMARVAJ, Rishabh KHURANA, Satyam KUMAR, Pranay KASHYAP, Karri Hima Satya HEMANTH, Himanshu ARORA, Yashwant SAINI, Sourav GHOSH
  • Publication number: 20240304010
    Abstract: A method for detecting artificial intelligence (AI) generated content in a video includes: receiving the video comprising a plurality of frames; detecting an object, a person, and a background in each frame; determining pixel-motion information of each pixel in each frame; determining a relationship among the object, the person, and the background and the corresponding pixel-motion information in each frame; determining one or more intrinsic properties of the object, the person, and the background in each frame based on the relationship among the object, the person, and the background and the corresponding pixel-motion information; detecting inconsistent motion of the object, the person, and the background in at least one frame of the video based on the one or more intrinsic properties of the object, the person, and the background; and indicating AI generated content in the at least one frame based on the detected inconsistent motion.
    Type: Application
    Filed: March 19, 2024
    Publication date: September 12, 2024
    Applicant: SAMSUNG ELECTRONICS CO., LTD.
    Inventors: Sourabh Vasant Gothe, Vibhav Agarwal, Jayesh Rajkumar Vachhani, Sourav Ghosh
  • Publication number: 20230377717
    Abstract: Provided is a method for predicting emotion of a user by an electronic device. The method includes receiving, by the electronic device, a user context, a device context and an environment context from the electronic device and one or more other electronic devices connected to the electronic device and determining, by the electronic device, a combined representation of the user context, the device context and the environment context. The method also includes determining, by the electronic device, a plurality of user characteristics based on the combined representation of the user context, the device context and the environment context; and predicting, by the electronic device, an emotion of the user based on the combined representation of the user context, the device context, the environment context and the plurality of user characteristics.
    Type: Application
    Filed: July 31, 2023
    Publication date: November 23, 2023
    Applicant: SAMSUNG ELECTRONICS CO., LTD.
    Inventors: Barath Raj KANDUR RAJA, Sumit Kumar, Sriram Shashank, Harichandahana Bhogaraju Sawaraja Sai, Chinmay Anand, Jayesh Rajkumar Vachhani, Ankita Bhardwaj, Shwetank Choudhary, Srishti Malaviya, Tarun Gopalakrishnan, Dwaraka Bhamidipati Sreevatsa
  • Publication number: 20230368534
    Abstract: A method for generating at least one segment of a video by an electronic device is provided. The method includes identifying at least one of a context associated with the video and an interaction of a user in connection with the video, analyzing at least one parameter in at least one frame of the video with reference to at least one of the context and the interaction of the user, wherein the at least one parameter includes at least one of a subject, an environment, an action of the subject, and an object, determining the at least one frame in which a change in the at least one parameter occurs, and generating at least one segment of the video comprising the at least one frame in which the parameter changed as a temporal boundary of the at least one segment.
    Type: Application
    Filed: July 24, 2023
    Publication date: November 16, 2023
    Inventors: Jayesh Rajkumar VACHHANI, Sourabh Vasant GOTHE, Barath Raj KANDUR RAJA, Pranay KASHYAP, Rakshith SRINIVAS, Rishabh KHURANA
  • Patent number: 11776289
    Abstract: Embodiments herein disclose a method and electronic device for predicting multi-modal drawings. The method includes: receiving, by the electronic device, at least one of a text input and strokes of a drawing and determining, by the electronic device, features associated with the text input and features associated with the strokes of the drawing. The method includes classifying, by the electronic device, the features associated with the text input and the features associated with the strokes of the drawing into one of a dominant feature and a non-dominant feature and performing, by the electronic device, early concatenation or late concatenation of the features based on the classification; classifying, by the electronic device, the strokes of the drawing based on the concatenation into a category using a deep neural network (DNN) model; and predicting, by the electronic device, primary drawings corresponding to the category.
    Type: Grant
    Filed: May 10, 2022
    Date of Patent: October 3, 2023
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Sourabh Vasant Gothe, Rakshith S, Jayesh Rajkumar Vachhani, Yashwant Singh Saini, Barath Raj Kandur Raja, Himanshu Arora, Rishabh Khurana
  • Publication number: 20220301331
    Abstract: Embodiments herein disclose a method and electronic device for predicting multi-modal drawings. The method includes: receiving, by the electronic device, at least one of a text input and strokes of a drawing and determining, by the electronic device, features associated with the text input and features associated with the strokes of the drawing. The method includes classifying, by the electronic device, the features associated with the text input and the features associated with the strokes of the drawing into one of a dominant feature and a non-dominant feature and performing, by the electronic device, early concatenation or late concatenation of the features based on the classification; classifying, by the electronic device, the strokes of the drawing based on the concatenation into a category using a deep neural network (DNN) model; and predicting, by the electronic device, primary drawings corresponding to the category.
    Type: Application
    Filed: May 10, 2022
    Publication date: September 22, 2022
    Inventors: Sourabh Vasant GOTHE, Rakshith S, Jayesh Rajkumar Vachhani, Yashwant Singh Saini, Barath Raj Kandur Raja, Himanshu Arora, Rishabh Khurana
  • Publication number: 20210209289
    Abstract: An apparatus and method for generating a customized content are provided. An apparatus for generating a customized content may include: at least one memory configured to store one or more instructions; at least one processor configured to execute the one or more instructions to: (1) obtain an input from a user; (2) detect, from the input, at least one feature and modality of the input among a plurality of modalities comprising a text format, a sound format, a still image format, and a moving image format; (3) determine a mode of the customized content, from a plurality of modes, based on the at least one feature and the modality of the input, the plurality of modes including an image mode and a text mode; and (4) generate the customized content based on the determined mode; and a display configured to display the customized content.
    Type: Application
    Filed: January 7, 2021
    Publication date: July 8, 2021
    Applicant: SAMSUNG ELECTRONICS CO., LTD.
    Inventors: Barath Raj KANDUR RAJA, Sumit KUMAR, Sanjana TRIPURAMALLU, Vibhav AGARWAL, Ankur AGARWAL, Chinmay ANAND, Likhith AMARVAJ, Shashank SRIRAM, Himanshu ARORA, Jayesh Rajkumar VACHHANI, Kranti CHALAMALASETTI, Rishabh KHURANA, Dwaraka Bhamidipati SREEVATSA, Raju Suresh DIXIT
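
The abstracts above are only high-level summaries; the short sketches below illustrate a few of the listed methods and are not the claimed implementations. First, a minimal sketch of the masked-transition idea in publication 20240321318. It assumes the frames are same-size BGR NumPy arrays, the event-boundary index is already known, and OpenCV's Farneback optical flow stands in for the per-component pixel-motion estimate.

    import cv2
    import numpy as np

    def insert_transition_frames(frames, boundary, num_masked=5):
        """Insert num_masked transition frames between frames[boundary-1] and frames[boundary]."""
        last_of_first_event = frames[boundary - 1].astype(np.float32)
        first_of_second_event = frames[boundary].astype(np.float32)

        # Estimate pixel motion across the event boundary.
        gray_a = cv2.cvtColor(frames[boundary - 1], cv2.COLOR_BGR2GRAY)
        gray_b = cv2.cvtColor(frames[boundary], cv2.COLOR_BGR2GRAY)
        flow = cv2.calcOpticalFlowFarneback(gray_a, gray_b, None, 0.5, 3, 15, 3, 5, 1.2, 0)

        # Pixels that move between the two events approximate the "transition component".
        moving = (np.linalg.norm(flow, axis=2, keepdims=True) > 1.0).astype(np.float32)

        masked_frames = []
        for i in range(1, num_masked + 1):
            alpha = i / (num_masked + 1)
            blend = (1.0 - alpha) * last_of_first_event + alpha * first_of_second_event
            # Transition effect: full cross-fade on moving pixels, dimmed cross-fade elsewhere,
            # so the moving component carries the transition through the masked frames.
            frame = blend * moving + 0.5 * blend * (1.0 - moving)
            masked_frames.append(frame.astype(np.uint8))

        return frames[:boundary] + masked_frames + frames[boundary:]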
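
For publication 20240304010, a minimal sketch of the motion-consistency check: per-region mean optical flow serves as a crude "intrinsic property", and a frame is flagged when one region's motion deviates sharply from the others. The region masks for object, person, and background, the flow estimator, and the threshold are all assumptions.

    import cv2
    import numpy as np

    def flag_inconsistent_motion(frames, region_masks, threshold=4.0):
        """frames: list of BGR arrays; region_masks: per-frame dicts of boolean masks keyed by region name."""
        flagged = []
        prev_gray = cv2.cvtColor(frames[0], cv2.COLOR_BGR2GRAY)
        for idx in range(1, len(frames)):
            gray = cv2.cvtColor(frames[idx], cv2.COLOR_BGR2GRAY)
            flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None, 0.5, 3, 15, 3, 5, 1.2, 0)
            prev_gray = gray

            # Mean motion vector per region: a crude stand-in for that region's intrinsic motion property.
            means = {name: flow[mask].mean(axis=0)
                     for name, mask in region_masks[idx].items() if mask.any()}
            if len(means) < 2:
                continue

            # Flag the frame when one region's motion deviates strongly from the joint average,
            # i.e. the object, person, and background do not move consistently with one another.
            vectors = np.stack(list(means.values()))
            spread = np.linalg.norm(vectors - vectors.mean(axis=0), axis=1).max()
            if spread > threshold:
                flagged.append(idx)
        return flagged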
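
For publication 20230377717, a minimal sketch of the combined-representation step: user, device, and environment context vectors are fused, user characteristics are derived from the fused vector, and both feed the emotion prediction. The dimensions and the two linear heads are assumptions, not the claimed model.

    import torch
    import torch.nn as nn

    class EmotionPredictor(nn.Module):
        def __init__(self, ctx_dim=32, num_characteristics=8, num_emotions=6):
            super().__init__()
            self.fuse = nn.Sequential(nn.Linear(3 * ctx_dim, 64), nn.ReLU())
            self.characteristics_head = nn.Linear(64, num_characteristics)
            self.emotion_head = nn.Linear(64 + num_characteristics, num_emotions)

        def forward(self, user_ctx, device_ctx, env_ctx):
            # Combined representation of the user, device, and environment contexts.
            combined = self.fuse(torch.cat([user_ctx, device_ctx, env_ctx], dim=-1))
            # User characteristics derived from the combined representation.
            characteristics = torch.sigmoid(self.characteristics_head(combined))
            # Emotion predicted from the combined representation plus the characteristics.
            return self.emotion_head(torch.cat([combined, characteristics], dim=-1))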
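
For publication 20230368534, the segment-boundary step reduces to comparing a per-frame parameter (a subject, environment, action, or object descriptor from some upstream analyzer, which is assumed here) and opening a new segment wherever it changes.

    def segments_from_parameter_changes(frame_params):
        """frame_params: one hashable descriptor per frame (e.g. a subject or action label)."""
        if not frame_params:
            return []
        segments, start = [], 0
        for idx in range(1, len(frame_params)):
            if frame_params[idx] != frame_params[idx - 1]:  # parameter change marks a temporal boundary
                segments.append((start, idx - 1))
                start = idx
        segments.append((start, len(frame_params) - 1))
        return segments

    # Example: three events -> [(0, 2), (3, 4), (5, 5)]
    print(segments_from_parameter_changes(["cooking", "cooking", "cooking", "eating", "eating", "cleanup"]))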
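
For patent 11776289 (also published as 20220301331), a minimal sketch of the early- versus late-concatenation choice for text and stroke features. Approximating "dominance" by feature norm, the small encoders, and the category head are all assumptions; the granted claims describe the DNN-based classification in more general terms.

    import torch
    import torch.nn as nn

    class MultiModalDrawingClassifier(nn.Module):
        def __init__(self, text_dim=64, stroke_dim=64, hidden=128, num_categories=10):
            super().__init__()
            self.early_encoder = nn.Sequential(nn.Linear(text_dim + stroke_dim, hidden), nn.ReLU())
            self.text_encoder = nn.Sequential(nn.Linear(text_dim, hidden // 2), nn.ReLU())
            self.stroke_encoder = nn.Sequential(nn.Linear(stroke_dim, hidden // 2), nn.ReLU())
            self.category_head = nn.Linear(hidden, num_categories)

        def forward(self, text_feat, stroke_feat):
            # Crude dominance test: if one modality's features clearly dominate, fuse late;
            # otherwise concatenate the raw features early and encode them jointly.
            dominant = abs(text_feat.norm() - stroke_feat.norm()) > 1.0
            if dominant:
                fused = torch.cat([self.text_encoder(text_feat), self.stroke_encoder(stroke_feat)], dim=-1)
            else:
                fused = self.early_encoder(torch.cat([text_feat, stroke_feat], dim=-1))
            return self.category_head(fused)  # logits over drawing categories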
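
For publication 20210209289, a minimal sketch of mode selection for customized content: detect the input modality, derive a simple feature, and pick an image or text mode. The modality test, the text-length feature, the 40-character cutoff, and the mode names are illustrative assumptions.

    def determine_mode(user_input):
        # Detect the modality of the input (text, sound, still image, or moving image).
        modality = ("text" if isinstance(user_input, str)
                    else "sound" if getattr(user_input, "is_audio", False)
                    else "moving image" if getattr(user_input, "num_frames", 1) > 1
                    else "still image")

        # One simple feature: length of the text, when the input is text.
        feature = len(user_input) if modality == "text" else None

        # Short text tends toward an image-style response; image inputs stay visual;
        # everything else falls back to a text-mode response.
        if modality == "text" and feature < 40:
            return "image mode"
        if modality in ("still image", "moving image"):
            return "image mode"
        return "text mode"

    print(determine_mode("great job!"))  # -> image mode
    print(determine_mode("a much longer message that reads more naturally as a text reply"))  # -> text mode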