Patents by Inventor Arun Rakesh Yoganandan

Arun Rakesh Yoganandan has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11966510
    Abstract: A method includes displaying a plurality of computer-generated objects, and obtaining finger manipulation data from a finger-wearable device via a communication interface. In some implementations, the method includes receiving an untethered input vector that includes a plurality of untethered input indicator values. Each of the plurality of untethered input indicator values is associated with one of a plurality of untethered input modalities. In some implementations, the method includes obtaining proxy object manipulation data from a physical proxy object via the communication interface. The proxy object manipulation data corresponds to sensor data associated with one or more sensors integrated in the physical proxy object. The method includes registering an engagement event with respect to a first one of the plurality of computer-generated objects based on a combination of the finger manipulation data, the untethered input vector, and the proxy object manipulation data.
    Type: Grant
    Filed: February 27, 2023
    Date of Patent: April 23, 2024
    Assignee: APPLE INC.
    Inventors: Adam G. Poulos, Aaron M. Burns, Arun Rakesh Yoganandan, Benjamin R. Blachnitzky, Nicolai Georg
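    A minimal sketch of the claimed input fusion, assuming hypothetical per-object scores and a made-up register_engagement helper rather than the patent's actual method:

    ```python
    # Hypothetical sketch: combine three input streams into one engagement decision.
    from dataclasses import dataclass

    @dataclass
    class UntetheredInput:
        # One indicator value per untethered input modality (e.g. gaze, voice).
        indicator_values: dict  # modality name -> confidence in [0, 1]

    def register_engagement(finger_data, untethered, proxy_data, objects,
                            threshold=0.5):
        """Pick the first object whose combined input score clears the threshold."""
        for obj in objects:
            score = (finger_data.get(obj, 0.0)
                     + max(untethered.indicator_values.values(), default=0.0)
                     + proxy_data.get(obj, 0.0)) / 3.0
            if score >= threshold:
                return obj  # engagement event registered for this object
        return None

    # Usage: per-object scores stand in for real sensor fusion.
    print(register_engagement({"cube": 0.9}, UntetheredInput({"gaze": 0.8}),
                              {"cube": 0.4}, ["cube", "sphere"]))  # cube
    ```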
  • Patent number: 11960657
    Abstract: A method includes, while displaying a computer-generated object at a first position within an environment, obtaining extremity tracking data from an extremity tracker. The first position is outside of a drop region that is viewable using the display. The method includes moving the computer-generated object from the first position to a second position within the environment based on the extremity tracking data. The method includes, in response to determining that the second position satisfies a proximity threshold with respect to the drop region, detecting an input that is associated with a spatial region of the environment. The method includes moving the computer-generated object from the second position to a third position that is within the drop region, based on determining that the spatial region satisfies a focus criterion associated with the drop region.
    Type: Grant
    Filed: March 21, 2023
    Date of Patent: April 16, 2024
    Inventors: Aaron M. Burns, Adam G. Poulos, Arun Rakesh Yoganandan, Benjamin Hylak, Benjamin R. Blachnitzky, Jordan A. Cazamias, Nicolai Georg
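    One plausible reading of the drop-region behavior in code; the distances, thresholds, and maybe_snap_to_drop_region name are illustrative assumptions:

    ```python
    import math

    def maybe_snap_to_drop_region(obj_pos, drop_center, gaze_point,
                                  proximity=0.3, focus_radius=0.2):
        # Snap the object into the drop region only when it is close enough
        # AND the user's spatial input (e.g. gaze) is focused on the region.
        near = math.dist(obj_pos, drop_center) <= proximity
        focused = math.dist(gaze_point, drop_center) <= focus_radius
        return drop_center if (near and focused) else obj_pos

    # Object dragged near the region while the user looks at it -> it snaps in.
    print(maybe_snap_to_drop_region((0.8, 1.0), (1.0, 1.0), (1.05, 0.95)))
    ```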
  • Patent number: 11853635
    Abstract: Content management includes receiving an image of one or more display devices with respective first content displayed. For at least one display device, a display device identification (ID) is retrieved. Upon interaction with the image, a change to the first content displayed at the at least one display device is caused based on the display device ID of that display device.
    Type: Grant
    Filed: September 30, 2016
    Date of Patent: December 26, 2023
    Assignee: SAMSUNG ELECTRONICS CO., LTD.
    Inventors: Arun Rakesh Yoganandan, Kumi Akiyoshi, Melinda Yang, Yuchang Hu, Mark Schlager
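    A toy sketch of routing a content change by display device ID, with detect_display_id standing in for whatever recognition the patent actually uses:

    ```python
    displays = {"dev-42": "weather", "dev-77": "news"}  # device ID -> content

    def detect_display_id(image_region):
        # Stand-in for real recognition (e.g. a visual marker in the image).
        return image_region.get("marker")

    def on_image_interaction(image_region, new_content):
        # Interacting with a display inside the image changes the content
        # shown on the corresponding physical display.
        device_id = detect_display_id(image_region)
        if device_id in displays:
            displays[device_id] = new_content

    on_image_interaction({"marker": "dev-42"}, "sports")
    print(displays)  # {'dev-42': 'sports', 'dev-77': 'news'}
    ```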
  • Publication number: 20230376110
    Abstract: A method is performed at an electronic device with one or more processors, a non-transitory memory, a display, and an extremity tracker. The method includes obtaining extremity tracking data via the extremity tracker. The method includes displaying a computer-generated representation of a trackpad that is spatially associated with a physical surface. The physical surface is viewable within the display along with a content manipulation region that is separate from the computer-generated representation of the trackpad. The method includes identifying a first location within the computer-generated representation of the trackpad based on the extremity tracking data. The method includes mapping the first location to a corresponding location within the content manipulation region. The method includes displaying an indicator indicative of the mapping. The indicator may overlap the corresponding location within the content manipulation region.
    Type: Application
    Filed: February 27, 2023
    Publication date: November 23, 2023
    Inventors: Adam G. Poulos, Aaron M. Burns, Arun Rakesh Yoganandan, Benjamin R. Blachnitzky, Nicolai Georg
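    The trackpad-to-region mapping lends itself to a short sketch; the rectangle layout and map_trackpad_to_region are assumed for illustration:

    ```python
    def map_trackpad_to_region(finger_xy, trackpad_rect, region_rect):
        # Normalize the fingertip location within the virtual trackpad, then
        # rescale it into the separate content manipulation region.
        tx, ty, tw, th = trackpad_rect
        rx, ry, rw, rh = region_rect
        u = (finger_xy[0] - tx) / tw
        v = (finger_xy[1] - ty) / th
        return (rx + u * rw, ry + v * rh)  # where to draw the indicator

    # A finger at the trackpad's center maps to the region's center.
    print(map_trackpad_to_region((0.5, 0.25), (0.0, 0.0, 1.0, 0.5),
                                 (2.0, 1.0, 4.0, 2.0)))  # (4.0, 2.0)
    ```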
  • Publication number: 20230333645
    Abstract: In one implementation, a method of processing input for multiple devices is performed by a first electronic device with one or more processors and non-transitory memory. The method includes determining a gaze direction. The method includes selecting a target electronic device based on determining that the gaze direction is directed to the target electronic device. The method includes receiving, via an input device, one or more inputs. The method includes processing the one or more inputs based on the target electronic device.
    Type: Application
    Filed: May 12, 2023
    Publication date: October 19, 2023
    Inventors: Alexis H. Palangie, Aaron M. Burns, Arun Rakesh Yoganandan, Benjamin R. Blachnitzky
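    A minimal sketch of gaze-based device selection, assuming unit direction vectors and a simple dot-product match (the abstract does not specify this):

    ```python
    def select_target(gaze_direction, devices):
        # Return the device whose direction best matches the gaze.
        def dot(a, b):
            return sum(x * y for x, y in zip(a, b))
        return max(devices, key=lambda name: dot(gaze_direction, devices[name]))

    devices = {"laptop": (0.0, 0.0, -1.0), "tablet": (0.7, 0.0, -0.7)}
    target = select_target((0.6, 0.1, -0.8), devices)
    print(f"routing inputs to {target}")  # tablet
    ```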
  • Publication number: 20230333650
    Abstract: A method is performed at an electronic device with one or more processors, a non-transitory memory, a display, and a communication interface provided to communicate with a finger-wearable device. The method includes displaying first instructional content that is associated with a first gesture. The first instructional content includes a first object. The method includes determining an engagement score that characterizes a level of user engagement with respect to the first object. The method includes obtaining finger manipulation data from the finger-wearable device via the communication interface. The method includes determining that the finger-wearable device performs the first gesture based on a function of the finger manipulation data.
    Type: Application
    Filed: February 27, 2023
    Publication date: October 19, 2023
    Inventors: Benjamin Hylak, Aaron M. Burns, Adam G. Poulos, Arun Rakesh Yoganandan, Benjamin R. Blachnitzky, Nicolai Georg
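    A hedged sketch of gating tutorial progress on an engagement score plus gesture detection; the scoring formula and pinch heuristic are invented for illustration:

    ```python
    def engagement_score(gaze_dwell_s, distance_m):
        # Toy score: longer dwell and closer proximity mean more engagement.
        return min(1.0, gaze_dwell_s / 2.0) * (1.0 if distance_m < 1.0 else 0.5)

    def detects_pinch(finger_samples, max_gap=0.02):
        # Stand-in for the function of finger manipulation data in the abstract.
        return any(s["fingertip_gap_m"] <= max_gap for s in finger_samples)

    engaged = engagement_score(gaze_dwell_s=3.0, distance_m=0.5) >= 0.8
    if engaged and detects_pinch([{"fingertip_gap_m": 0.015}]):
        print("first gesture performed; advance the instructional content")
    ```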
  • Publication number: 20230333651
    Abstract: A method is performed at an electronic device with one or more processors, a non-transitory memory, a display, an extremity tracking system, and a communication interface provided to communicate with a finger-wearable device. The method includes displaying a computer-generated object on the display. The method includes obtaining finger manipulation data from the finger-wearable device via the communication interface. The method includes determining a multi-finger gesture based on extremity tracking data from the extremity tracking system and the finger manipulation data. The method includes registering an engagement event with respect to the computer-generated object according to the multi-finger gesture.
    Type: Application
    Filed: March 20, 2023
    Publication date: October 19, 2023
    Inventors: Aaron M. Burns, Adam G. Poulos, Arun Rakesh Yoganandan, Benjamin Hylak, Benjamin R. Blachnitzky, Nicolai Georg
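    One way the multi-finger fusion could look, with all signal names (pinch, twist_rad) invented for the example:

    ```python
    def classify_multi_finger_gesture(tracked_fingertips, wearable):
        # Require agreement between the camera-based extremity tracker
        # (how many fingertips touch the object) and the wearable's signals.
        if len(tracked_fingertips) >= 2 and wearable.get("pinch", False):
            return "rotate" if abs(wearable.get("twist_rad", 0.0)) > 0.2 else "pinch"
        return None

    gesture = classify_multi_finger_gesture({"thumb", "index"},
                                            {"pinch": True, "twist_rad": 0.5})
    print(f"engagement event via multi-finger gesture: {gesture}")  # rotate
    ```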
  • Publication number: 20230325047
    Abstract: A method includes displaying a plurality of computer-generated objects, including a first computer-generated object at a first position within an environment and a second computer-generated object at a second position within the environment. The first computer-generated object corresponds to a first user interface element that includes a first set of controls for modifying a content item. The method includes, while displaying the plurality of computer-generated objects, obtaining extremity tracking data. The method includes moving the first computer-generated object from the first position to a third position within the environment based on the extremity tracking data. The method includes, in accordance with a determination that the third position satisfies a proximity threshold with respect to the second position, merging the first computer-generated object with the second computer-generated object in order to generate a third computer-generated object for modifying the content item.
    Type: Application
    Filed: March 15, 2023
    Publication date: October 12, 2023
    Inventors: Nicolai Georg, Aaron M. Burns, Adam G. Poulos, Arun Rakesh Yoganandan, Benjamin Hylak, Benjamin R. Blachnitzky
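    A small sketch of proximity-triggered merging of two control palettes; positions, threshold, and merge_if_close are illustrative only:

    ```python
    import math

    def merge_if_close(obj_a, obj_b, threshold=0.25):
        # Dragging one control palette onto another combines their controls.
        if math.dist(obj_a["pos"], obj_b["pos"]) <= threshold:
            return {"pos": obj_b["pos"],
                    "controls": obj_a["controls"] + obj_b["controls"]}
        return None

    brush = {"pos": (0.9, 1.0), "controls": ["size", "opacity"]}
    color = {"pos": (1.0, 1.0), "controls": ["hue", "saturation"]}
    print(merge_if_close(brush, color))
    ```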
  • Publication number: 20230325004
    Abstract: Methods for interacting with objects and user interface elements in a computer-generated environment provide for an efficient and intuitive user experience. In some embodiments, a user can directly or indirectly interact with objects. In some embodiments, while performing an indirect manipulation, manipulations of virtual objects are scaled. In some embodiments, while performing a direct manipulation, manipulations of virtual objects are not scaled. In some embodiments, an object can be reconfigured from an indirect manipulation mode into a direct manipulation mode by moving the object to a respective position in the three-dimensional environment in response to a respective gesture.
    Type: Application
    Filed: March 10, 2023
    Publication date: October 12, 2023
    Inventors: Aaron M. BURNS, Alexis H. PALANGIE, Nathan GITTER, Nicolai GEORG, Benjamin R. BLACHNITZKY, Arun Rakesh YOGANANDAN, Benjamin HYLAK, Adam G. POULOS
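    The direct/indirect scaling distinction reduces to a few lines; the distance-based scale factor here is an assumption, not the claimed formula:

    ```python
    def apply_manipulation(obj_pos, hand_delta, mode, target_distance_m=1.0):
        # Indirect manipulation scales hand motion (here, with distance);
        # direct manipulation applies the motion one-to-one.
        scale = target_distance_m if mode == "indirect" else 1.0
        return tuple(p + d * scale for p, d in zip(obj_pos, hand_delta))

    print(apply_manipulation((0.0, 0.0, -3.0), (0.1, 0.0, 0.0), "indirect", 3.0))
    print(apply_manipulation((0.0, 0.0, -3.0), (0.1, 0.0, 0.0), "direct"))
    ```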
  • Publication number: 20230315202
    Abstract: A method includes displaying a plurality of computer-generated objects, and obtaining finger manipulation data from a finger-wearable device via a communication interface. In some implementations, the method includes receiving an untethered input vector that includes a plurality of untethered input indicator values. Each of the plurality of untethered input indicator values is associated with one of a plurality of untethered input modalities. In some implementations, the method includes obtaining proxy object manipulation data from a physical proxy object via the communication interface. The proxy object manipulation data corresponds to sensor data associated with one or more sensors integrated in the physical proxy object. The method includes registering an engagement event with respect to a first one of the plurality of computer-generated objects based on a combination of the finger manipulation data, the untethered input vector, and the proxy object manipulation data.
    Type: Application
    Filed: February 27, 2023
    Publication date: October 5, 2023
    Inventors: Adam G. Poulos, Aaron M. Burns, Arun Rakesh Yoganandan, Benjamin R. Blachnitzky, Nicolai Georg
  • Patent number: 11768546
    Abstract: In one implementation, a method is provided for visually indicating positional and rotational information of a finger-wearable device. The method includes: determining a set of translational values and a set of rotational values for the finger-wearable device, wherein the finger-wearable device is worn on a finger of a user; displaying, via a display, a visual representation of a location of the finger-wearable device based on the set of translational values; generating a visual representation of a grasp region of the user based on the set of translational values and the set of rotational values; and concurrently displaying, via the display, the visual representation of the grasp region with the visual representation of the location of the finger-wearable device.
    Type: Grant
    Filed: August 22, 2021
    Date of Patent: September 26, 2023
    Assignee: APPLE INC.
    Inventors: Adam Gabriel Poulos, Benjamin Rolf Blachnitzky, Nicolai Philip Georg, Arun Rakesh Yoganandan, Aaron Mackay Burns
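    A rough sketch of deriving a grasp-region marker from the device's translation and rotation, collapsing rotation to yaw purely for brevity:

    ```python
    import math

    def grasp_region_center(device_pos, yaw_rad, offset_m=0.04):
        # Place the grasp region a fixed offset from the finger-wearable
        # device along the direction implied by its rotation.
        dx = math.cos(yaw_rad) * offset_m
        dy = math.sin(yaw_rad) * offset_m
        return (device_pos[0] + dx, device_pos[1] + dy, device_pos[2])

    # Both the device marker and this grasp region would be drawn concurrently.
    print(grasp_region_center((0.2, 0.1, -0.5), math.pi / 2))
    ```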
  • Publication number: 20230297168
    Abstract: A method is performed at an electronic device with one or more processors, a non-transitory memory, a display, and a communication interface provided to communicate with a finger-wearable device. The method includes, while displaying a plurality of content items on the display, obtaining finger manipulation data from the finger-wearable device via the communication interface. The method includes selecting a first one of the plurality of content items based on a first portion of the finger manipulation data in combination with satisfaction of a proximity threshold. The first one of the plurality of content items is associated with a first dimensional representation. The method includes changing the first one of the plurality of content items from the first dimensional representation to a second dimensional representation based on a second portion of the finger manipulation data.
    Type: Application
    Filed: March 15, 2023
    Publication date: September 21, 2023
    Inventors: Nicolai Georg, Aaron M. Burns, Adam G. Poulos, Arun Rakesh Yoganandan, Benjamin Hylak, Benjamin R. Blachnitzky
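    A toy sketch of two-stage selection and 2D-to-3D promotion; the gesture flags and proximity value are hypothetical:

    ```python
    import math

    def handle_finger_data(items, pinch_pos, first_pinch, second_gesture,
                           proximity=0.1):
        # First portion of the data: a pinch near an item selects it.
        # Second portion: a follow-up gesture promotes it from 2D to 3D.
        for item in items:
            if first_pinch and math.dist(pinch_pos, item["pos"]) <= proximity:
                item["selected"] = True
            if item.get("selected") and second_gesture:
                item["representation"] = "3D"
        return items

    items = [{"pos": (0.0, 0.0), "representation": "2D"}]
    print(handle_finger_data(items, (0.05, 0.0), True, True))
    ```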
  • Publication number: 20230297172
    Abstract: A method includes, while displaying a computer-generated object at a first position within an environment, obtaining extremity tracking data from an extremity tracker. The first position is outside of a drop region that is viewable using the display. The method includes moving the computer-generated object from the first position to a second position within the environment based on the extremity tracking data. The method includes, in response to determining that the second position satisfies a proximity threshold with respect to the drop region, detecting an input that is associated with a spatial region of the environment. The method includes moving the computer-generated object from the second position to a third position that is within the drop region, based on determining that the spatial region satisfies a focus criterion associated with the drop region.
    Type: Application
    Filed: March 21, 2023
    Publication date: September 21, 2023
    Inventors: Aaron M. Burns, Adam G. Poulos, Arun Rakesh Yoganandan, Benjamin Hylak, Benjamin R. Blachnitzky, Jordan A. Cazamias, Nicolai Georg
  • Publication number: 20230095282
    Abstract: In one implementation, a method is provided for displaying a first pairing affordance that is world-locked to a first peripheral device. The method may be performed by an electronic device including a non-transitory memory, one or more processors, a display, and one or more input devices. The method includes detecting the first peripheral device within a three-dimensional (3D) environment via a computer vision technique. The method includes receiving, via the one or more input devices, a first user input that is directed to the first peripheral device within the 3D environment. The method includes, in response to receiving the first user input, displaying, on the display, the first pairing affordance that is world-locked to the first peripheral device within the 3D environment.
    Type: Application
    Filed: September 22, 2022
    Publication date: March 30, 2023
    Inventors: Benjamin R. Blachnitzky, Aaron M. Burns, Anette L. Freiin von Kapri, Arun Rakesh Yoganandan, Benjamin H. Boesel, Evgenii Krivoruchko, Jonathan Ravasz, Shih-Sang Chiu
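    World-locking reduces to re-anchoring the affordance to the detected peripheral each frame, as in this sketch (the class and detection format are assumptions):

    ```python
    class PairingAffordance:
        def __init__(self, peripheral_id):
            self.peripheral_id = peripheral_id
            self.position = None  # follows the peripheral, not the user's head

        def update(self, detections):
            # detections: peripheral ID -> 3D position from computer vision.
            if self.peripheral_id in detections:
                self.position = detections[self.peripheral_id]

    affordance = PairingAffordance("keyboard-1")
    affordance.update({"keyboard-1": (0.1, -0.2, -0.6)})
    print(affordance.position)  # stays at the keyboard as the user moves
    ```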
  • Patent number: 11221823
    Abstract: A method includes receiving a voice input at an electronic device. An ambiguity of the voice input is determined. The ambiguity is resolved based on contextual data. The contextual data includes at least one of: an image, a non-voice input comprising a gesture, a pointer of a pointing device, a touch, or a combination thereof.
    Type: Grant
    Filed: December 28, 2017
    Date of Patent: January 11, 2022
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Arun Rakesh Yoganandan, Kumi Akiyoshi, Tais C. Mauk, Chang Long Zhu Jin, Jung Won Hong
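    A minimal sketch of resolving a voice ambiguity from pointing context; a real system would use far richer language understanding:

    ```python
    def resolve_voice_command(utterance, context):
        # Replace the ambiguous referent with whatever the accompanying
        # gesture, pointer, or touch indicates.
        if "that" in utterance.split():
            target = context.get("pointed_at")
            if target:
                return utterance.replace("that", target)
        return utterance

    print(resolve_voice_command("turn that off", {"pointed_at": "the lamp"}))
    ```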
  • Patent number: 11216149
    Abstract: An electronic device includes a touchscreen display and at least one processor. The at least one processor is configured to present a dynamic user interface on the touchscreen display. The dynamic user interface includes a first input region forming a dial. The first input region is configured to receive user input related to a first viewing direction associated with a 360° video. The at least one processor is also configured to detect the user input. In addition, the at least one processor is configured to transmit the user input to a display device to control presentation of the 360° video on the display device. The dynamic user interface may further include a second input region, where the second input region is configured to receive second user input related to a second viewing direction associated with the 360° video.
    Type: Grant
    Filed: March 15, 2019
    Date of Patent: January 4, 2022
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Arun Rakesh Yoganandan, Yuchang Hu, Chang Long Zhu Jin, Kristen Kator
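    The dial-to-viewing-direction conversion might look like this, assuming a simple angle-from-center reading of the touch point:

    ```python
    import math

    def dial_to_yaw(touch_xy, dial_center):
        # Convert a touch on the circular dial region into a yaw angle for
        # the 360-degree video shown on the remote display.
        dx = touch_xy[0] - dial_center[0]
        dy = touch_xy[1] - dial_center[1]
        return math.degrees(math.atan2(dy, dx)) % 360.0

    yaw = dial_to_yaw((120, 200), (100, 200))
    print(f"send viewing direction {yaw:.0f} degrees to the display device")
    ```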
  • Patent number: 11209937
    Abstract: A hover touch controller device includes a touch sensor having a touch surface and a proximity sensor. The touch sensor provides two-dimensional position information on when and where a user's finger touches the touch surface. The proximity sensor provides three-dimensional position information on pre-touch events. The pre-touch events correspond to the user's finger hovering over the touch surface within some maximum depth. The hover touch controller device further includes a processor. The processor determines from the three-dimensional information a hover point projected on the touch surface and determines from the two-dimensional information a touch point on the touch surface. The processor communicates the hover point and the touch point to a display device. This can include correcting for any perceived user interaction issues associated with an offset between the hover point and the touch point.
    Type: Grant
    Filed: July 8, 2019
    Date of Patent: December 28, 2021
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Arun Rakesh Yoganandan, Jee Hoon Kim, Chang Long Zhu Jin
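    A small sketch of the hover/touch offset correction, assuming a constant offset learned at the moment of contact:

    ```python
    def offset_correction(hover_xy, touch_xy):
        # Offset observed when the hovering finger actually lands.
        return (touch_xy[0] - hover_xy[0], touch_xy[1] - hover_xy[1])

    def corrected_hover(hover_xy, offset):
        # Apply the learned offset to later hover points before sending
        # them to the display device.
        return (hover_xy[0] + offset[0], hover_xy[1] + offset[1])

    offset = offset_correction(hover_xy=(50, 80), touch_xy=(53, 84))
    print(corrected_hover((60, 90), offset))  # (63, 94)
    ```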
  • Patent number: 10969899
    Abstract: A hover touch controller device includes a touch sensor having a touch surface with a first aspect ratio and a proximity sensor. Information on when and where a user touches the touch surface is detected. Additionally, information on hover input events is detected. The hover input events correspond to a user's finger hovering over the touch surface within some maximum depth. The hover touch controller device further includes a processor. The processor communicates three-dimensional spatial information to a Graphical User Interface (GUI). The GUI generates visualizations based on the hover events and the touch events on a display having an interactive surface with a second aspect ratio. The processor further corrects for any issues associated with supporting a variety of GUI designs. This can include correcting for a difference between the first aspect ratio and the second aspect ratio.
    Type: Grant
    Filed: July 19, 2019
    Date of Patent: April 6, 2021
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Arun Rakesh Yoganandan, Chang Long Zhu Jin
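    A sketch of one possible aspect-ratio correction (centered letterboxing); the patent does not commit to this particular scheme:

    ```python
    def correct_aspect(norm_xy, sensor_ar=1.0, display_ar=16 / 9):
        # Map a normalized touch-surface coordinate onto a wider GUI without
        # stretching the user's motion: compress x into a centered band.
        x, y = norm_xy
        if display_ar > sensor_ar:
            band = sensor_ar / display_ar
            x = 0.5 + (x - 0.5) * band
        return (x, y)

    print(correct_aspect((1.0, 0.5)))  # x pulled toward center, y unchanged
    ```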
  • Publication number: 20210019010
    Abstract: A hover touch controller device includes a touch sensor having a touch surface with a first aspect ratio and a proximity sensor. Information on when and where a user touches the touch surface is detected. Additionally, information on hover input events is detected. The hover input events correspond to a user's finger hovering over the touch surface within some maximum depth. The hover touch controller device further includes a processor. The processor communicates three-dimensional spatial information to a Graphical User Interface (GUI). The GUI generates visualizations based on the hover events and the touch events on a display having an interactive surface with a second aspect ratio. The processor further corrects for any issues associated with supporting a variety of GUI designs. This can include correcting for a difference between the first aspect ratio and the second aspect ratio.
    Type: Application
    Filed: July 19, 2019
    Publication date: January 21, 2021
    Inventors: Arun Rakesh Yoganandan, Chang Long Zhu Jin
  • Publication number: 20210011604
    Abstract: A hover touch controller device includes a touch sensor having a touch surface and a proximity sensor. The touch sensor provides two-dimensional position information on when and where a user's finger touches the touch surface. The proximity sensor provides three-dimensional position information on pre-touch events. The pre-touch events correspond to the user's finger hovering over the touch surface within some maximum depth. The hover touch controller device further includes a processor. The processor determines from the three-dimensional information a hover point projected on the touch surface and determines from the two-dimensional information a touch point on the touch surface. The processor communicates the hover point and the touch point to a display device. This can include correcting for any perceived user interaction issues associated with an offset between the hover point and the touch point.
    Type: Application
    Filed: July 8, 2019
    Publication date: January 14, 2021
    Inventors: Arun Rakesh Yoganandan, Jee Hoon Kim, Chang Long Zhu Jin