Including Orientation Sensors (e.g., Infrared, Ultrasonic, Remotely Controlled) Patents (Class 345/158)
-
Patent number: 12164697
Abstract: This disclosure provides an in-vehicle mid-air gesture-based interaction method, an electronic apparatus, and a system, and relates to the field of intelligent vehicle technologies. The method includes: obtaining a first mid-air gesture detected by a camera; and starting, when a preset response operation corresponding to the first mid-air gesture matches a first user who initiates the first mid-air gesture, the preset response operation corresponding to the first mid-air gesture in response to the first mid-air gesture. The method can be used in an in-vehicle mid-air gesture-based interaction scenario, reduce a mid-air gesture misoperation rate, and improve driving safety and interaction experience.
Type: Grant
Filed: December 28, 2022
Date of Patent: December 10, 2024
Assignee: HUAWEI TECHNOLOGIES CO., LTD.
Inventors: Qiuyuan Tang, Shuaihua Peng, Hao Li
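The gating step this abstract describes — start a gesture's preset response only when that response matches the user who initiated the gesture — can be sketched as follows. The gesture names, responses, and permission table are hypothetical stand-ins, not taken from the patent.

```python
# Sketch of the gesture/user gating step: a detected mid-air gesture
# triggers its preset response only when the response is permitted for
# the occupant who initiated the gesture. All names here are invented.

PRESET_RESPONSES = {
    "swipe_left": "next_track",
    "palm_push": "answer_call",
}

# Hypothetical per-occupant permissions (e.g. driver vs. passenger).
PERMITTED = {
    "driver": {"answer_call"},
    "passenger": {"next_track", "answer_call"},
}

def handle_gesture(gesture, initiating_user):
    """Return the response operation to start, or None if no match."""
    response = PRESET_RESPONSES.get(gesture)
    if response is None:
        return None
    if response in PERMITTED.get(initiating_user, set()):
        return response
    return None
```

A lookup that fails either step simply returns None, which is one way the "match the first user" condition could suppress misoperations.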
-
Patent number: 12154011
Abstract: A wrist-mounted system for tracking hand poses includes one or more cameras mounted to a wearable band. In some examples, the one or more cameras are low profile and may be located within 15 mm, and preferably within 10 mm, of the wrist of the user. In some examples, a system includes a wearable band and one or more imaging sensors. The one or more imaging sensors are disposed to have a field of view that is anatomically distal when the wearable band is coupled to an arm of the user. The one or more imaging sensors each define an optical axis spaced less than 15 mm, and preferably less than 10 mm, from the wearable band. The image data may come from a single camera mounted on the back of the wrist which captures images of the surface contours of the back of the hand to infer hand poses and gestures.
Type: Grant
Filed: April 26, 2021
Date of Patent: November 26, 2024
Assignees: Cornell University, Wisconsin Alumni Research Foundation
Inventors: Cheng Zhang, Fang Hu, Yin Li
-
Patent number: 12153728
Abstract: An optical system for a virtual retina display and a gesture detection of a user of the virtual retina display. The optical system includes a projector unit, an image source, and an image processing unit. The projector unit includes a first, second, and a third light source, and a first controllable deflection unit for scanning deflection of first, second, and third light beams. The optical system further includes a second deflection unit designed to transmit the first and second scanned light beams and to deflect the third light beam into a gesture detection area of the user. The optical system further includes a deflection unit, onto which the image content is projectable and which is configured to direct the projected image content and the second light beam onto an eye of a user.
Type: Grant
Filed: November 10, 2023
Date of Patent: November 26, 2024
Assignee: ROBERT BOSCH GMBH
Inventors: Johannes Fischer, Johannes Meyer
-
Patent number: 12147981
Abstract: Systems and methods are disclosed for device movement-based authentication. One method comprises receiving contextual data from one or more sensors of a user device and determining a device movement pattern based on the received contextual data. The determined device movement pattern is compared with a device movement-based signature associated with a user of the user device. If the determined device movement pattern matches the device movement-based signature within a predetermined threshold, the user is authenticated for an electronic transaction. If the determined device movement pattern does not match the device movement-based signature, a notification indicating authentication failure is sent to the user device.
Type: Grant
Filed: May 15, 2019
Date of Patent: November 19, 2024
Assignee: Worldpay, LLC
Inventor: Daren L. Pickering
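The threshold comparison at the heart of this abstract can be sketched with a fixed-length movement feature vector and a simple distance metric. The feature extraction, vector length, and threshold value are all assumptions for illustration; the patent does not specify them.

```python
import math

# Sketch of the movement-signature match: an observed movement pattern
# (here, a feature vector derived from motion-sensor data) matches the
# enrolled signature when their distance falls within a threshold.
# The metric and threshold are hypothetical.

def euclidean(a, b):
    """Euclidean distance between two equal-length feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def authenticate(pattern, signature, threshold=0.5):
    """True if the observed pattern matches the enrolled signature
    within the predetermined threshold."""
    return euclidean(pattern, signature) <= threshold
```

A failed match would then drive the authentication-failure notification mentioned in the abstract.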
-
Patent number: 12141381
Abstract: The present disclosure is directed to selective gesture recognition for handheld device gestures. An example method includes receiving, by a handheld interactive object, movement information descriptive of a gesture performed with the handheld interactive object. The method includes selecting a local and/or remote machine-learned model for processing the movement information. The movement information can be processed to identify a gesture action corresponding to the movement information. The local and/or remote machine-learned model can be selected based on user input data and/or a complexity of the movement information. In response to selecting the local machine-learned model, the method includes processing the movement information according to the local machine-learned model and communicating a message to a remote device based on the result.
Type: Grant
Filed: February 22, 2021
Date of Patent: November 12, 2024
Assignee: GOOGLE LLC
Inventors: Dev Bhargava, Alejandro Kauffmann
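The selection logic — route movement data to the local model unless its complexity or an explicit user preference says otherwise — can be sketched like this. Using sample count as the complexity measure, and the threshold value, are assumptions; the abstract only says complexity and user input drive the choice.

```python
# Sketch of local-vs-remote model selection: an explicit user
# preference wins; otherwise a hypothetical complexity measure
# (sample count) decides whether the on-device model is enough.

LOCAL_THRESHOLD = 100  # hypothetical max sample count for local model

def select_model(movement_samples, user_pref=None):
    """Return "local" or "remote" for processing the movement data."""
    if user_pref in ("local", "remote"):
        return user_pref  # user input data overrides the heuristic
    if len(movement_samples) <= LOCAL_THRESHOLD:
        return "local"
    return "remote"
```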
-
Patent number: 12135369
Abstract: Apparatus and associated methods relate to an array of individually readable distance sensors disposed along a first axis on a platform and configurable to detect penetration of a first plane containing the first axis, and an array of individually controllable light emitting indicators disposed on the platform along at least a second axis and configurable to emit visual indicia to a user out of the first plane. The visual indicia may, for example, be associated with the detected penetration. A reconfigurable predetermined detection window may, for example, be generated by associating adjacent sensors detecting input during a teaching operation. The detection window may, for example, be further generated by determining at least one distance threshold profile as a function of input received from the adjacent sensors during the teaching operation. Various embodiments may advantageously enable efficient configuration of generic sensing and indication units.
Type: Grant
Filed: January 20, 2021
Date of Patent: November 5, 2024
Assignee: BANNER ENGINEERING CORP.
Inventor: Charles Dolezalek
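The teaching operation — derive a per-sensor distance threshold profile from readings captured while the user demonstrates inputs, then flag penetration against that profile — can be sketched as below. Treating the threshold as the maximum taught distance plus a margin is an assumption; the abstract only says the profile is a function of the teaching input.

```python
# Sketch of the teaching operation for a distance-sensor array:
# build one distance threshold per sensor from taught samples, then
# report which sensors see a reading inside the detection window.
# The margin is a hypothetical tolerance.

def teach_profile(teaching_readings, margin=5):
    """teaching_readings: one list of distance samples per sensor,
    captured during the teaching operation. Returns a threshold
    per sensor (taught maximum plus a tolerance margin)."""
    return [max(samples) + margin for samples in teaching_readings]

def detect(reading, profile):
    """Indices of sensors whose current distance reading penetrates
    the detection window (is at or below its threshold)."""
    return [i for i, (r, t) in enumerate(zip(reading, profile)) if r <= t]
```

The indicator array described in the abstract could then light the indices returned by `detect`.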
-
Patent number: 12131544
Abstract: A method for capturing motion of an object, the method comprising: installing at least one marker on the object; bringing the object having the at least one marker installed thereon in an acquisition volume; arranging at least two event-based light sensors such that respective fields of view of the at least two event-based light sensors cover the acquisition volume, wherein each event-based light sensor has an array of pixels; receiving events asynchronously from the pixels of the at least two event-based light sensors depending on variations of incident light from the at least one marker sensed by the pixels; and processing the events to position the at least one marker within the acquisition volume and capture motion of the object.
Type: Grant
Filed: May 14, 2020
Date of Patent: October 29, 2024
Assignee: PROPHESEE
Inventors: Nicolas Bourdis, Davide Migliore
-
Patent number: 12125219
Abstract: A system for performing synergistic object tracking and pattern recognition for event representation includes a computing platform having processing hardware and a system memory storing a software code. The processing hardware is configured to execute the software code to receive event data corresponding to one or more properties of an object, to generate, using the event data, location data estimating a location of each of multiple predetermined landmarks of the object, and to predict, using one or both of the event data and the location data, a pattern corresponding to the one or more properties of the object. The processing hardware is further configured to execute the software code to update, using the predicted pattern, the location data, and to merge the updated location data and the predicted pattern to provide merged data.
Type: Grant
Filed: April 4, 2023
Date of Patent: October 22, 2024
Assignee: Disney Enterprises, Inc.
Inventors: Keith Comito, Kevin Prince
-
Patent number: 12125159
Abstract: A device receives an image including image data of a scale model of a vehicle, and processes the image data, with a model, to identify a make, a model, and a year represented by the scale model. The device determines augmented reality (AR) vehicle information based on the make, the model, and the year represented by the scale model of the vehicle, and provides the AR vehicle information to enable a user device to associate the AR vehicle information with the image of the scale model of the vehicle. The device receives an input associated with the AR vehicle information, and determines updated AR vehicle information based on the input associated with the AR vehicle information. The device provides the updated AR vehicle information to enable the user device to associate the updated AR vehicle information with the image of the scale model of the vehicle.
Type: Grant
Filed: August 8, 2023
Date of Patent: October 22, 2024
Assignee: Capital One Services, LLC
Inventors: Micah Price, Qiaochu Tang, Jason Hoover, Stephen Wylie, Geoffrey Dagley, Kristen Przano
-
Patent number: 12118147
Abstract: A grippable haptic device includes an input device that receives a user input, a 9-axis sensor that detects movement of a hand of the user, a vibrator that provides a tactile sensation to the user, and a motion sensor that detects whether the user grips the grippable haptic device. An insole-type haptic device includes an input device that receives a user input, a plurality of 9-axis sensors that detect movement of a foot of the user, a vibrator that provides a tactile sensation to a sole of the foot of the user, and a plurality of pressure sensors that are distributed across the insole-type haptic device and measure a pressure exerted at each position of the sole of the user.
Type: Grant
Filed: June 21, 2022
Date of Patent: October 15, 2024
Assignee: WHOBORN INC.
Inventor: Young Sik Bae
-
Patent number: 12117883
Abstract: An information handling system mouse includes a position sensor that detects movement and a push button that detects button press inputs for communication to an information handling system. The position sensor detects positions with a high sensitivity having a higher power consumption or a low sensitivity having a lower power consumption. The push button detects button press inputs with a high scan rate having a higher power consumption or a low scan rate having a lower power consumption. The mouse configures to plural high and low power consumption modes based upon a detected usage pattern and/or a command from an information handling system, such as to adapt the mouse to use by different applications executing on the information handling system.
Type: Grant
Filed: June 14, 2022
Date of Patent: October 15, 2024
Assignee: Dell Products L.P.
Inventors: Peng Lip Goh, Deeder M. Aurongzeb
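The mode configuration this abstract describes — pick sensor sensitivity and button scan rate from a detected usage pattern, with a host command able to override — can be sketched as a small lookup. The mode names and their mappings are hypothetical.

```python
# Sketch of the adaptive power configuration: each usage pattern maps
# to a sensitivity/scan-rate pair, and an explicit command from the
# information handling system overrides the detected pattern.
# Mode names and mappings are invented for illustration.

MODES = {
    "gaming": {"sensitivity": "high", "scan_rate": "high"},
    "office": {"sensitivity": "high", "scan_rate": "low"},
    "idle":   {"sensitivity": "low",  "scan_rate": "low"},
}

def configure(usage_pattern, host_command=None):
    """Return the sensor settings for the active mode."""
    return MODES[host_command or usage_pattern]
```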
-
Patent number: 12112449
Abstract: Presenting an image of a scene may include capturing an image of a scene by a camera of an electronic device, wherein the electronic device comprises the camera and a display, and wherein the camera and the display have a first spatial relationship, determining a second spatial relationship between a viewpoint and the display of the electronic device, warping the image to obtain an image of a first portion of the scene based on the first spatial relationship and the second spatial relationship, and presenting the warped image on the display, wherein, from the viewpoint, the image of the first portion of the scene is substantially contiguous with a second portion of the scene visible outside an edge of the electronic device.
Type: Grant
Filed: June 20, 2023
Date of Patent: October 8, 2024
Assignee: Apple Inc.
Inventors: Brett D. Miller, Minwoong Kim, Ricardo J. Motta
-
Patent number: 12111995
Abstract: A method performed by a device (e.g., touch controller) includes deriving a pen position, and transmitting a report indicating the derived pen position to a host computer. The report includes a first area storing position data indicating the derived position and a second area storing non-position data different from the position data. The method includes securing a buffer area for retaining a plurality of the reports in a memory, deriving the pen position at each determined scanning time, and generating a report including the position data indicating the derived position. The method includes storing the report in the buffer area and, in response to obtaining the non-position data, writing the non-position data in the second area of each of one or more of the reports stored in the buffer area. The method includes transmitting the plurality of the reports stored in the buffer area to the host computer.
Type: Grant
Filed: February 8, 2023
Date of Patent: October 8, 2024
Assignee: Wacom Co., Ltd.
Inventors: Sadao Yamamoto, Yoshio Nomura
-
Patent number: 12112037
Abstract: The present disclosure generally relates to techniques and interfaces for managing media playback devices. In some embodiments, the techniques include varying a feedback based on movement of a computer system toward or away from an external device. In some embodiments, the techniques include displaying an interface that includes controls for controlling media playback on an external device when the computer system and the external device are playing media. In some embodiments, the techniques include performing operations at a computer system in response to an input having a size that is less than or greater than a size threshold. In some embodiments, the techniques include performing different operations at a computer system when status lights have states that indicate different states of the computer system.
Type: Grant
Filed: August 3, 2023
Date of Patent: October 8, 2024
Assignee: Apple Inc.
Inventors: Taylor G. Carrigan, Patrick L. Coffman, Pedro Mari, Camille Moussette, Gemma Alexandria Roper, Peter C. Tsoi
-
Patent number: 12112011
Abstract: Some examples of the disclosure are directed to methods for application-based spatial refinement in a multi-user communication session including a first electronic device and a second electronic device. While the first electronic device is presenting a three-dimensional environment, the first electronic device receives an input corresponding to a request to move a shared object in the three-dimensional environment. In accordance with a determination that the shared object is an object of a first type, the first electronic device moves the shared object and an avatar of a user in the three-dimensional environment in accordance with the input. In accordance with a determination that the shared object is an object of a second type, different from the first type, and the input is a first type of input, the first electronic device moves the shared object in the three-dimensional environment in accordance with the input, without moving the avatar.
Type: Grant
Filed: September 11, 2023
Date of Patent: October 8, 2024
Assignee: Apple Inc.
Inventors: Connor A. Smith, Christopher D. McKenzie, Nathan Gitter
-
Patent number: 12102406
Abstract: A system and method of repositioning input control devices includes an operator workstation for controlling a computer-assisted device. The operator workstation includes an input control device for use by an operator, one or more sensors, and a controller coupled to the one or more sensors and the input control device. The controller is configured to determine whether the operator is interacting with the input control device using the one or more sensors and, in response to determining a lack of operator interaction with the input control device, determine a trajectory for moving the input control device from a current position or orientation toward a desired position or orientation, and move the input control device along the trajectory. In some embodiments, the controller is further configured to stop movement of the input control device along the trajectory in response to detecting a stopping condition.
Type: Grant
Filed: October 22, 2018
Date of Patent: October 1, 2024
Assignee: INTUITIVE SURGICAL OPERATIONS, INC.
Inventor: Lawton N. Verner
-
Patent number: 12097030
Abstract: The present disclosure provides a system and method for monitoring the cognitive state of a patient based on eye image data. The patient monitoring system includes a camera unit configured for recording images of an eye of the patient, and a data processing sub-system in data communication with the camera that is operable to (i) receive and process eye image data from said camera, (ii) classify said eye image data into gestures and identify such gestures indicative of the cognitive state of the patient, and (iii) transmit a signal communicating said cognitive state to a remote unit. The system may further include an actuator module and an output unit wherein said output may be an automated medical questionnaire.
Type: Grant
Filed: August 5, 2020
Date of Patent: September 24, 2024
Assignee: EYEFREE ASSISTING COMMUNICATION, LTD.
Inventors: Or Retzkin, Itai Kornberg
-
Patent number: 12101551
Abstract: An image capturing apparatus including a first image capturing section for capturing an image of an object, a second image capturing section for performing image capturing in a rear direction, and an operation section configured to receive an operation input by a user. In a case where no operation on the operation section is performed after an image of an object has been captured by the first image capturing section, whether or not a user has intention of capturing an image is determined based on an image captured by the second image capturing section. In a case where it is determined that the user does not have the intention of capturing an image, the first image capturing section is set to a first operation state rather than a second operation state larger in power consumption than the first operation state.
Type: Grant
Filed: December 2, 2022
Date of Patent: September 24, 2024
Assignee: CANON KABUSHIKI KAISHA
Inventor: Daisuke Enomoto
-
Patent number: 12093464
Abstract: A method of interpreting a digit-to-digit gesture based on roll values for a wrist-wearable device is disclosed. The method includes receiving an indication at a first point in time that a user donning the wrist-wearable device is providing a digit-to-digit gesture in which one of the user's digits touches another of the user's digits without contacting the display of the wearable device. The method further includes, in accordance with a determination that the digit-to-digit gesture is provided while data indicates that the wrist-wearable device has a first roll value, causing a target device in communication with the wearable device to perform a first input command, and, upon receiving another indication, at a second point in time that is after the first point in time, that the user is providing the digit-to-digit gesture again, causing the target device to perform a second input command that is distinct from the first input command.
Type: Grant
Filed: August 31, 2022
Date of Patent: September 17, 2024
Assignee: META PLATFORMS TECHNOLOGIES, LLC
Inventors: Yinglin Li, Qian Chen, Chengyuan Yan
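The roll-based dispatch this abstract describes — the same pinch maps to different input commands depending on the wrist's roll at the moment of the gesture — can be sketched with a few angle bands. The band boundaries and command names are hypothetical.

```python
# Sketch of the roll-value dispatch: one digit-to-digit pinch, three
# possible commands depending on wrist roll at gesture time.
# Angle bands and command names are invented for illustration.

def command_for_pinch(roll_degrees):
    """Map the wrist roll at pinch time to an input command."""
    if -30 <= roll_degrees <= 30:   # palm roughly neutral
        return "play_pause"
    if roll_degrees > 30:           # palm rolled upward
        return "volume_up"
    return "volume_down"            # palm rolled downward
```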
-
Patent number: 12093473
Abstract: An information handling system stylus transmits a wireless signal at a writing tip to enhance touch detection of the writing tip by a touchscreen display and receives wireless signals from the touchscreen display at a receiving antenna. To enhance control of wireless energy distributed at the writing tip, the receiving antenna is selectively coupled to the writing tip, such as by transitioning from a float of the receiving antenna to an interface with the stylus power source at transmit by the writing tip. Charge at the receiving antenna helps to shape the energy distribution from the writing tip, such as to match the energy distribution of other styluses in use at the touchscreen display.
Type: Grant
Filed: July 31, 2023
Date of Patent: September 17, 2024
Assignee: Dell Products L.P.
Inventors: Kuo-Wei Tseng, How-Lan Eric Lin, Yu-Chen Liu, Chi-Fong Lee, Wei-Chou Chen
-
Patent number: 12093465
Abstract: Methods and systems for gesture-based control of a device are described. An input frame is processed to determine a location of a distinguishing anatomical feature in the input frame. A virtual gesture-space is defined based on the location of the distinguishing anatomical feature, the virtual gesture-space being a defined space for detecting a gesture input. The input frame is processed only in the virtual gesture-space to detect and track at least one hand. Using information generated from detecting and tracking the at least one hand, a gesture class is determined for the at least one hand. The device may be a smart television, a smart phone, a tablet, etc.
Type: Grant
Filed: September 22, 2022
Date of Patent: September 17, 2024
Assignee: HUAWEI TECHNOLOGIES CO., LTD.
Inventors: Juwei Lu, Sayem Mohammad Siam, Wei Zhou, Peng Dai, Xiaofei Wu, Songcen Xu
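The virtual gesture-space step — expand a region around the detected anatomical feature and restrict hand detection to it — reduces to bounding-box arithmetic, sketched below. The scale factor is an assumption, and the detectors themselves are out of scope.

```python
# Sketch of the virtual gesture-space: expand a box around a detected
# distinguishing feature (e.g. a face bounding box), clip it to the
# frame, and run hand detection only inside it. The scale factor is
# hypothetical; real feature/hand detectors are not shown.

def gesture_space(feature_box, frame_w, frame_h, scale=2.0):
    """feature_box = (x, y, w, h). Returns the expanded, clipped box
    in which hand detection will run."""
    x, y, w, h = feature_box
    cx, cy = x + w / 2, y + h / 2          # box center
    nw, nh = w * scale, h * scale          # scaled extent
    nx = max(0, cx - nw / 2)
    ny = max(0, cy - nh / 2)
    return (nx, ny, min(nw, frame_w - nx), min(nh, frame_h - ny))

def inside(point, box):
    """True if a candidate detection falls inside the gesture-space."""
    px, py = point
    x, y, w, h = box
    return x <= px <= x + w and y <= py <= y + h
```

Detections outside the box would simply be discarded, which is what limits processing to the gesture-space.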
-
Patent number: 12094434
Abstract: Various embodiments of the present disclosure relate to an electronic device for changing attributes of a display, and an operation method in an electronic device. The electronic device may be configured to: acquire user information; set user period information for changing attributes of the display based on the acquired user information; acquire information associated with content that is being displayed on the display or has been determined to be displayed on the display; change an attribute setting value of the display based on the set user period information and the information associated with the content; and control the display to display content based on the changed attribute setting value.
Type: Grant
Filed: May 11, 2022
Date of Patent: September 17, 2024
Assignee: SAMSUNG ELECTRONICS CO., LTD.
Inventors: Seungryeol Kim, Wonhee Choe
-
Patent number: 12095969
Abstract: A motion sensory and imaging device capable of acquiring imaging information of the scene and providing at least a near real time pass-through of imaging information to a user. The sensory and imaging device can be used stand-alone or coupled to a wearable or portable device to create a wearable sensory system capable of presenting to the wearer the imaging information augmented with virtualized or created presentations of information.
Type: Grant
Filed: September 28, 2023
Date of Patent: September 17, 2024
Assignee: ULTRAHAPTICS IP TWO LIMITED
Inventors: David S. Holz, Neeloy Roy, Hongyuan He
-
Patent number: 12085390
Abstract: A method includes obtaining gravity data of a terminal, determining that the terminal is in a first posture in a first time period, determining a movement direction of a user in the first time period based on the first posture and azimuth data, determining that the terminal is in an unstable state in a second time period, determining a movement direction of the user in the second time period based on the movement direction in the first time period, determining that the terminal is in a stable state in a third time period, determining, by the terminal, a movement direction of the user in the third time period based on the movement direction in the second time period and azimuth data, and determining, by the terminal, a movement track of the user based on the movement directions of the user in all the time periods.
Type: Grant
Filed: August 8, 2018
Date of Patent: September 10, 2024
Assignee: HUAWEI TECHNOLOGIES CO., LTD.
Inventor: Qi Li
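The state-dependent update the abstract walks through — trust fresh azimuth data when the terminal is stable, carry the last direction forward while it is unstable — can be sketched as a fold over time periods. The stability flag and the bare azimuth value are simplified stand-ins for the posture and sensor handling the patent describes.

```python
# Sketch of the per-period direction update: in a stable period the
# direction comes from azimuth data; in an unstable period the last
# known direction is carried forward. The track is the sequence of
# per-period directions. Inputs are simplified stand-ins.

def track_directions(periods):
    """periods: list of (stable, azimuth_degrees_or_None).
    Returns the movement direction inferred for each period."""
    directions = []
    last = None
    for stable, azimuth in periods:
        if stable and azimuth is not None:
            last = azimuth          # fresh azimuth data is trusted
        directions.append(last)     # otherwise carry direction forward
    return directions
```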
-
Patent number: 12076247
Abstract: Presented are methods and systems for determining, monitoring, and displaying the relative positioning of two rigid bodies during surgery. In particular, the present disclosure relates to methods and systems for positioning a prosthesis relative to a bone during a surgery as well as to systems and methods for verifying resulting relative positioning of adjacent bones.
Type: Grant
Filed: December 21, 2021
Date of Patent: September 3, 2024
Assignee: Intellijoint Surgical Inc.
Inventors: Richard Tyler Fanson, Andre Novomir Hladio, Armen Garo Bakirtzian
-
Patent number: 12078907
Abstract: An optical display system includes a first display, a plurality of electronic drivers, a controller, and a combiner. Light from a scene is combined with image light from the first display, and the combined light presented to an observer. The combiner includes an electrochromic layer comprising one or more electrochromic regions disposed between the scene and the combiner. The electronic drivers are arranged to electrically connect with and drive respective of the electrochromic regions. The controller is configured to control the plurality of electronic drivers to individually address each of the electrochromic regions to selectively drive some of the electrochromic regions to change light transmission of the selectively driven electrochromic regions.
Type: Grant
Filed: October 2, 2023
Date of Patent: September 3, 2024
Assignee: Rockwell Collins, Inc.
Inventors: Carlo L. Tiana, Eric P. Stratton, Christopher A. Keith, Robert D. Brown, Robert W. Wood
-
Patent number: 12069395
Abstract: An electronic device, computer program product, and method are provided that automatically focus on meaningful content in an image stream during an instruction-oriented video communication session. The electronic device communicatively connects, via a network, to one or more second electronic devices during a video communication session. A controller of the electronic device identifies a person and a writing surface within an image stream from a local image capturing device. In response to determining that the person is attending to the writing surface, the controller communicates to the video communication session a first portion of the image stream that focuses on the writing surface. In response to determining that the person is not attending to the writing surface, the controller communicates to the video communication session, via the at least one network interface, a second portion of the image stream that does not focus on the writing surface.
Type: Grant
Filed: October 15, 2021
Date of Patent: August 20, 2024
Assignee: Motorola Mobility LLC
Inventors: Amit Kumar Agrawal, Rahul B Desai
-
Patent number: 12061734
Abstract: An information processing apparatus according to the present disclosure includes a display control unit that causes a display device to display a mark for a virtual object at an instruction position that is a position determined on the basis of a plurality of directions indicated by a user.
Type: Grant
Filed: June 3, 2020
Date of Patent: August 13, 2024
Assignee: SONY GROUP CORPORATION
Inventors: Miwa Ichikawa, Jun Kimura, Ai Nakada, Soichiro Inatani, Takuro Noda, Kunihito Sawai, Shingo Utsuki, Kenji Sugihara
-
Patent number: 12056208
Abstract: A method and an apparatus for performing a localization of a movable treatment device having an inertial sensor and configured to treat a target surface. A motion pattern recognition device discriminates between two or more motion patterns contained in a set of motion patterns. An interface provides at least one inertial sensor data from the inertial sensor to the motion pattern recognition device. At least one inertial sensor data represents a movement of the movable treatment device. A neural network is configured to receive the at least one inertial sensor data and to map the at least one inertial sensor data to at least one motion pattern contained in the set of motion patterns associated with one or more different zones of the target surface so that the mapping of the at least one inertial sensor data with the at least one motion pattern indicates an estimation of the location of the device with respect to at least one zone of the target surface.
Type: Grant
Filed: February 15, 2019
Date of Patent: August 6, 2024
Assignee: Braun GmbH
Inventors: Faiz Feisal Sherman, Xiaole Mao
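The mapping the abstract describes — inertial sensor data to a motion pattern, and through it to a zone of the target surface — can be illustrated with a nearest-template lookup standing in for the neural network. The feature vectors, templates, and zone names are all hypothetical.

```python
# Sketch of the pattern-to-zone localization: classify an inertial
# feature vector to the nearest motion-pattern template, each of which
# is associated with a zone of the target surface. A nearest-template
# lookup stands in for the neural network in the abstract.

TEMPLATES = {  # hypothetical motion-pattern -> zone associations
    (1.0, 0.0): "upper-left",
    (0.0, 1.0): "lower-right",
}

def locate(features):
    """Estimate the zone of the target surface from inertial features."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    nearest = min(TEMPLATES, key=lambda t: dist2(features, t))
    return TEMPLATES[nearest]
```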
-
Patent number: 12056348
Abstract: In an embodiment, a flick motion is detected on a touch screen interface of a first device. In response to detecting the flick motion: data associated with the flick motion is identified and transmitted to a second device. The data may be automatically displayed in response to detecting the flick motion.
Type: Grant
Filed: April 17, 2023
Date of Patent: August 6, 2024
Assignee: TiVo Solutions Inc.
Inventor: Robin Hayes
-
Patent number: 12057126
Abstract: According to an aspect, a method for distributed sound/image recognition using a wearable device includes receiving, via at least one sensor device, sensor data, and detecting, by a classifier of the wearable device, whether or not the sensor data includes an object of interest. The classifier is configured to execute a first machine learning (ML) model. The method includes transmitting, via a wireless connection, the sensor data to a computing device in response to the object of interest being detected within the sensor data, where the sensor data is configured to be used by a second ML model on the computing device or a server computer for further sound/image classification.
Type: Grant
Filed: October 13, 2020
Date of Patent: August 6, 2024
Assignee: Google LLC
Inventors: Alex Olwal, Kevin Balke, Dmitrii Votintcev
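The two-stage flow — a small on-device classifier gates wireless transmission so the heavier second-stage model only sees data that likely contains an object of interest — can be sketched as follows. The score function is a trivial stand-in for the first ML model, and the threshold is hypothetical.

```python
# Sketch of the distributed recognition flow: a first-stage score on
# the wearable gates transmission to the second-stage model. The mean
# signal energy used here is a stand-in for the first ML model.

def first_stage_score(sensor_data):
    """Stand-in for the wearable's small classifier."""
    return sum(abs(x) for x in sensor_data) / len(sensor_data)

def maybe_transmit(sensor_data, send, threshold=0.5):
    """Call send(sensor_data) only when the first stage detects an
    object of interest; return whether data was transmitted."""
    if first_stage_score(sensor_data) >= threshold:
        send(sensor_data)
        return True
    return False
```

Keeping uninteresting data off the radio is what makes this split attractive for a battery-constrained wearable.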
-
Patent number: 12043183
Abstract: Provided is an audio-video-navigation (AVN) monitor drive unit for a vehicle, which is configured to tilt an AVN monitor coupled to a crash pad in a direction toward a passenger seat. The AVN monitor drive unit includes a monitor assembly, a hinge bracket slidably positioned at a rear of the monitor assembly, and a tilting portion configured to connect the monitor assembly and the hinge bracket to each other and tilt the monitor assembly from the hinge bracket.
Type: Grant
Filed: December 6, 2021
Date of Patent: July 23, 2024
Assignee: HYUNDAI MOBIS CO., LTD.
Inventors: Yin Peng Xu, Ju Hwan Park, Liu Wei, Wang Zhen
-
Patent number: 12045398
Abstract: A point-of-sale device including an adjustable screen, an accelerometer, and a processor is provided. The adjustable screen is arranged in a cashier position or a customer position. The accelerometer generates orientation data corresponding to the adjustable screen. The processor determines, via an accelerometer driver, an adjustable screen position based on the orientation data. The processor then generates, via the accelerometer driver, a position change notification if the adjustable screen changes position. The processor then provides the position change notification and the adjustable screen position to an application selector. The processor then launches, via the application selector upon receiving the position change notification, an application based on the adjustable screen position. In one example, the launched application is a point-of-sale application if the adjustable screen position is the cashier position.
Type: Grant
Filed: September 27, 2022
Date of Patent: July 23, 2024
Assignee: VeriFone, Inc.
Inventors: Bhanu Raghuraman Aiyar, Rajesh Brahmankar, Kathir Prabhakaran Shanmugam, Kesavan M, Pankaj Raman Mohapatra, Vadirajachar P G, Winston Leong, Pradeep Moka, Harshananda R S, Venkatesh Channa
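The flow this abstract walks through — derive the screen position from orientation data, notify on a position change, and launch the matching application — can be sketched as below. The tilt-angle band and application names are hypothetical.

```python
# Sketch of the application-selector flow: orientation data determines
# the adjustable screen's position; a position change launches the
# application mapped to the new position. Angle band and app names
# are invented for illustration.

def screen_position(tilt_degrees):
    """Cashier-facing when tilted past 45 degrees, else customer."""
    return "cashier" if tilt_degrees > 45 else "customer"

APP_FOR_POSITION = {"cashier": "point_of_sale", "customer": "payment"}

def on_orientation(tilt_degrees, last_position):
    """Return (new_position, launched_app). launched_app is None when
    the position did not change (no position change notification)."""
    position = screen_position(tilt_degrees)
    if position != last_position:
        return position, APP_FOR_POSITION[position]
    return position, None
```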
-
Patent number: 12039155
Abstract: A screen content magnification method and device, and a computer readable storage medium. The method includes steps of: starting a magnifier function after a starting instruction of the magnifier function is detected, starting a camera device to acquire an area image of a preset area, and detecting whether a finger image corresponding to the finger of a user exists in the area image; if it is detected that a finger image exists in the area image, determining, on the basis of the finger image, position coordinates of the finger of the user in the area image; and determining size data of a touch screen, determining, on the basis of the size data and the position coordinates, a corresponding position to be magnified, and magnifying, on the basis of the magnifier function, screen content of a position area corresponding to the position.
Type: Grant
Filed: December 15, 2020
Date of Patent: July 16, 2024
Assignee: GOERTEK INC.
Inventor: Jing Liu
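The coordinate step — scale the finger's position in the camera's area image to touch-screen coordinates using the screen size data, then define the region to magnify around that point — can be sketched as follows. The resolutions and window radius are hypothetical.

```python
# Sketch of the position mapping for the magnifier: scale finger
# coordinates from the camera image to the touch screen, then clip a
# square magnification window around the mapped point. Resolutions
# and the window radius are hypothetical.

def to_screen(finger_xy, cam_size, screen_size):
    """Scale a point from camera-image space to screen space."""
    fx, fy = finger_xy
    cw, ch = cam_size
    sw, sh = screen_size
    return (fx * sw / cw, fy * sh / ch)

def magnify_region(center, screen_size, radius=50):
    """The screen rectangle (x0, y0, x1, y1) to magnify, clipped."""
    cx, cy = center
    sw, sh = screen_size
    return (max(0, cx - radius), max(0, cy - radius),
            min(sw, cx + radius), min(sh, cy + radius))
```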
-
Patent number: 12032746
Abstract: The technology disclosed relates to a method of realistic displacement of a virtual object for an interaction between a control object in a three-dimensional (3D) sensory space and the virtual object in a virtual space that the control object interacts with. In particular, it relates to detecting free-form gestures of a control object in a three-dimensional (3D) sensory space and generating for display a 3D solid control object model for the control object during the free-form gestures, including sub-components of the control object and, in response to detecting a 2D sub-component free-form gesture of the control object in the 3D sensory space in virtual contact with the virtual object, depicting, in the generated display, the virtual contact and resulting rotation of the virtual object by the 3D solid control object model.
Type: Grant
Filed: July 18, 2022
Date of Patent: July 9, 2024
Assignee: Ultrahaptics IP Two Limited
Inventors: Alex Marcolina, David S. Holz
-
Patent number: 12019801
Abstract: Wearable computing apparatuses, which can be adapted to be worn on a user's hand, are provided for augmented reality, virtual reality, and artificial intelligence interactions. Generally, the wearable computing apparatus can include a first subassembly comprising one or more processors, non-transitory memory for storing instructions, at least one haptic motor, and a first set of sensors configured to measure positional characteristics associated with a user's hand. The wearable computing apparatus can further comprise a second subassembly removably coupled to the first subassembly, the second subassembly including a plurality of leads each of which is attached to a finger and comprises a distal portion that houses a haptic motor and a second set of sensors. The second set of sensors is configured to measure positional characteristics associated with the user's fingers.
Type: Grant
Filed: March 15, 2023
Date of Patent: June 25, 2024
Inventor: Olaoluwa O. Adesanya
-
Patent number: 12015871
Abstract: A set of glasses frames includes electronic components for video capture and is configured to continuously capture video in a twenty-second loop. On the frames is a capture button that, when pressed, causes the electronic circuitry to store the prior twenty seconds and the following twenty seconds, for a total of forty seconds of video, in non-transitory memory. The electronic circuitry in the frames also includes a Bluetooth radio and a Wi-Fi radio, enabling the frames to communicate with a mobile device, and more particularly to provide the saved video clips to an app running on the mobile device. The app allows for storage of the video clips on the phone, editing of the video clips, upload of the video clips to the Internet, and configuring of user-adjustable settings on the electronic circuitry of the glasses.
Type: Grant
Filed: March 7, 2023
Date of Patent: June 18, 2024
Inventors: Armaan Saini, Casey Spencer, Vivek Vinodh
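The capture scheme described (a rolling pre-capture window plus an equal post-press window) is a classic ring-buffer pattern. A minimal sketch, using frame counts in place of twenty seconds of video; all class and method names are illustrative assumptions:

```python
from collections import deque

class LoopRecorder:
    """Continuously record into a bounded ring buffer; pressing capture
    saves the prior `loop_frames` frames plus the following `loop_frames`
    frames as one clip (i.e., 20 s before + 20 s after = 40 s total)."""
    def __init__(self, loop_frames):
        self.loop_frames = loop_frames
        self.ring = deque(maxlen=loop_frames)  # rolling pre-capture window
        self.pending = None                    # clip being completed post-press

    def press_capture(self):
        # Snapshot the prior window; keep recording until the clip doubles.
        self.pending = list(self.ring)

    def on_frame(self, frame):
        """Feed one captured frame; returns the finished clip if the
        post-press window just completed, else None."""
        saved = None
        if self.pending is not None:
            self.pending.append(frame)
            if len(self.pending) >= 2 * self.loop_frames:
                saved, self.pending = self.pending, None
        self.ring.append(frame)
        return saved
```

With a three-frame loop, pressing capture after frames 1-4 and then feeding three more frames yields the six-frame clip [2, 3, 4, 5, 6, 7]: the last three pre-press frames plus the next three.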
-
Patent number: 12010437
Abstract: An arrangement determination apparatus for determining an arrangement of a plurality of cameras used for generation of a virtual viewpoint image acquires information indicating an imaging target region, and determines the arrangement of the plurality of cameras based on the information in such a manner that the entire imaging target region is captured by a first set of two or more of the cameras and is also captured by a second set of two or more cameras different from the first set.
Type: Grant
Filed: November 3, 2021
Date of Patent: June 11, 2024
Assignee: Canon Kabushiki Kaisha
Inventors: Michio Aizawa, Yasushi Shikata, Shinichiro Uno, Atsushi Date
-
Patent number: 12000529
Abstract: Disclosed is a display apparatus including a display assembly comprising a screen configured to display an image, and a stand connected to the display assembly to support the display assembly. The stand comprises a support bar extending in a front-rear direction, a leg extending upward from the support bar, and a plurality of coupling parts provided on the leg and arranged in the front-rear direction. The display assembly is connected to one of the plurality of coupling parts.
Type: Grant
Filed: April 14, 2021
Date of Patent: June 4, 2024
Assignee: SAMSUNG ELECTRONICS CO., LTD.
Inventors: Heebong Kim, Dianaminsun Kang, Kyounghwan Kim, Boumsik Kim, Seunghwan Song, Jinyoung Shin, Minhee Lee, Hosuk Chae
-
Patent number: 12001604
Abstract: Disclosed is an apparatus including: an image analysis unit that detects a user's point of gaze with respect to display information of a display unit on the basis of an image captured by a camera; a user attention information analysis unit that extracts attention information on which the user's point of gaze has rested for a prescribed length of time or longer and generates a list including information corresponding to the attention information; and a display information generation unit that generates a display list including list configuration data. The user attention information analysis unit continuously receives the user's point-of-gaze information and executes sorting processing that places at the top of the list the latest attention information, or the attention information on which the user's point of gaze rested longest. The display information generation unit generates a display list including, for example, the top data of the list configuration data.
Type: Grant
Filed: September 19, 2019
Date of Patent: June 4, 2024
Assignee: SONY CORPORATION
Inventors: Ryohei Yasuda, Shinichi Kawano
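The sorting step (keep only items gazed at for a prescribed time, then put the latest or longest-dwelled item on top) can be sketched as follows; the record format, threshold, and tie-breaking rule are illustrative assumptions:

```python
def build_display_list(attention_records, dwell_threshold=2.0):
    """attention_records: (item, dwell_seconds, last_seen_timestamp) tuples.
    Keep only items the user gazed at for at least `dwell_threshold`
    seconds, then sort so the most recently attended item is on top;
    ties are broken in favor of the longer dwell time."""
    kept = [r for r in attention_records if r[1] >= dwell_threshold]
    kept.sort(key=lambda r: (r[2], r[1]), reverse=True)
    return [item for item, _, _ in kept]
```

For example, a briefly glanced-at banner is filtered out, while the remaining items are ordered by how recently the user looked at them.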
-
Patent number: 11995231
Abstract: A system and method for executing code instructions of a gaze detection function system, comprising: a processor executing code instructions of a software application presenting a graphical user interface (GUI) to a user on a display device; a camera for capturing an image of the user; and the gaze detection function system detecting a location of the user's gaze relative to a display device location based on tracking gaze from the image, generating a gaze detection feedback signal, and sending the gaze detection feedback signal to a wireless headset device via a wireless radio system to initiate audio or haptic feedback at the wireless headset device to confirm that the display device location has been identified from the detection of the user's gaze.
Type: Grant
Filed: July 21, 2022
Date of Patent: May 28, 2024
Assignee: DELL PRODUCTS LP
Inventors: Deeder M. Aurongzeb, Peng Lip Goh
-
Patent number: 11995792
Abstract: Computer-implemented system and method that allow for the rehabilitation of individuals with strabismus by leveraging commercially available VR headset technology. The eye tracking capabilities and external cameras of existing VR headsets enable the system and method to first gather the image that the dominant, unaffected eye is focused on (FIG. 1). Then, given where the unaffected eye's gaze falls on its screen, the system and method take that image and identify where the same point lies on the strabismus-affected eye's screen. This point is the ideal gaze point of the strabismus-affected eye. The software solution disclosed herein then warps the strabismus-affected eye's screen so that this new gaze lines up with the unaffected eye's gaze.
Type: Grant
Filed: September 23, 2021
Date of Patent: May 28, 2024
Inventors: Christopher Pratt Foley, Jr., Carlie Madison Bolling, David Michael Guardiola, Gabriel Khalfani Smith, Katrina Kera Pham
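The core correction reduces to a screen-space offset: the difference between the ideal gaze point (derived from the dominant eye) and the affected eye's measured gaze point, by which the affected eye's screen is shifted. A minimal sketch under that simplifying assumption (a real implementation would warp, not just translate, the image):

```python
def warp_offset(ideal_gaze, affected_gaze):
    """Screen-space shift that realigns the affected eye's view: the
    vector from the affected eye's measured gaze point to the ideal
    gaze point taken from the dominant eye. Coordinates in pixels."""
    return (ideal_gaze[0] - affected_gaze[0],
            ideal_gaze[1] - affected_gaze[1])

def apply_offset(point, offset):
    """Translate one screen point by the computed offset."""
    return (point[0] + offset[0], point[1] + offset[1])
```

Applying the offset to the affected eye's measured gaze point lands it exactly on the ideal gaze point, which is the alignment condition the method aims for.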
-
Patent number: 11985391
Abstract: A display apparatus includes a display, a remote controller, and a controller. The controller is configured to: receive a first command for fitness training; cause the display to present one or more modes, including a follow-up mode and an exercising-while-watching mode, for selection; in response to selection of the follow-up mode, cause the display to show a first window and a second window disposed on a first user interface for receiving a focus move command from the remote controller, with the first window displaying a first video associated with fitness training and the second window displaying a second video associated with image data from a camera; receive a volume adjustment command; and, in response to the focus being on a volume control for the first or second video, adjust the volume of the first or second video in response to the volume adjustment command. A method for controlling the apparatus is also disclosed.
Type: Grant
Filed: September 28, 2022
Date of Patent: May 14, 2024
Assignee: JUHAOKAN TECHNOLOGY CO., LTD.
Inventors: Xiaokui Qu, Guangqiang Wang, Jing Ning, Xiaozong Chen, Chao Wu, Jingang Liu, Jiayi Ding
-
Patent number: 11977688
Abstract: A virtual reality system (100) and device for use therein involve an optical detection arrangement (200) configured to detect a light pattern associated with a unique identity, and a tool (300) having a plurality of light-emitting devices (600) engaged therewith. The light-emitting devices (600) are selectively user-actuatable between a plurality of light patterns detectable by the optical detection arrangement (200). The light patterns provide a plurality of unique identities of the tool (300) as detected by the optical detection arrangement (200).
Type: Grant
Filed: May 8, 2019
Date of Patent: May 7, 2024
Assignee: DREAMSCAPE IMMERSIVE, INC.
Inventors: Christopher Purvis, Michael Fusco
-
Patent number: 11977672
Abstract: Systems and techniques are described herein for generating and/or processing virtual content in a distributed system (e.g., a distributed extended reality system). For example, a first device (e.g., a client device) of a distributed extended reality system may transmit one or more sets of pose prediction parameters (e.g., prediction coefficients, a prediction time associated with raw pose data, and/or the raw pose data) to a second device (e.g., a server device) of the distributed extended reality system. The second device may predict one or more poses of the first device based on the set(s) of pose prediction parameters, and may generate virtual content based on a pose of the predicted pose(s) of the first device. The second device may transmit the virtual content, and the first device may receive it. The first device may then display one or more virtual objects based at least in part on the received virtual content.
Type: Grant
Filed: June 3, 2022
Date of Patent: May 7, 2024
Assignee: QUALCOMM Incorporated
Inventors: Sriram Ajaykumar, Vinay Melkote Krishnaprasad, Arjun Sitaram
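The pose prediction parameters described (prediction coefficients, a prediction time, and raw pose data) admit a simple first-order sketch: the server extrapolates the client's raw pose forward by the prediction horizon using velocity coefficients. This is an illustrative assumption about the prediction model, not the patent's actual algorithm, and it covers only position (a real system would also extrapolate orientation, e.g., as a quaternion):

```python
def predict_position(raw_position, velocity_coeffs, prediction_time):
    """First-order pose prediction: extrapolate each position coordinate
    by its velocity coefficient over the prediction horizon.

    raw_position    -- (x, y, z) pose sample from the client, in meters
    velocity_coeffs -- per-axis velocity prediction coefficients, in m/s
    prediction_time -- how far ahead to predict, in seconds
    """
    return tuple(p + v * prediction_time
                 for p, v in zip(raw_position, velocity_coeffs))
```

The server renders virtual content for the predicted pose so that, by the time the frame reaches the client, it matches where the headset actually is, which hides the round-trip latency of the distributed system.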
-
Patent number: 11979511
Abstract: A terminal device according to an embodiment includes: a microphone; a communication unit that performs communication via a network; a first sensor that obtains depth information; a first face detecting unit that detects a face, and the distance to the face, based on the depth information; and a processor. A communication program makes the processor perform transmission and reception of messages, including voice messages, and has a sound input mode for enabling collection of sounds using the microphone and transmission of voice messages based on the collected sounds. When this communication program is executed and a message is received by the communication unit under its control, the processor performs first-type face detection using the first face detecting unit and controls turning the sound input mode ON and OFF according to the result of the first-type face detection.
Type: Grant
Filed: August 2, 2019
Date of Patent: May 7, 2024
Assignee: Sony Group Corporation
Inventors: Mikio Ishimaru, Kenta Endo
-
Patent number: 11972853
Abstract: The present disclosure generally relates to computer user interfaces, and more specifically to techniques for presenting activity trends and managing workouts.
Type: Grant
Filed: September 23, 2022
Date of Patent: April 30, 2024
Assignee: Apple Inc.
Inventors: Aled H. Williams, Julie A. Arney, Matthew J. Sundstrom, Molly P. Wiebe
-
Patent number: 11971754
Abstract: Embodiments are generally directed to a flexible overlapping display. An embodiment of a mobile device includes a processor to process data for the mobile device, a bendable and foldable display screen, one or more device sensors to sense an orientation of the mobile device, and one or more display sensors to sense a current arrangement of the display screen. The processor is to identify one or more portions of the display screen that are visible to a user based at least in part on data from the one or more device sensors and the one or more display sensors.
Type: Grant
Filed: January 27, 2023
Date of Patent: April 30, 2024
Assignee: Intel Corporation
Inventors: Guy M. Therien, David W. Browning, Joshua L. Zuniga
-
Patent number: 11972067
Abstract: A system for promoting user manipulation of a pointing device while operating a keyboard is provided. The system comprises a computer, a USB device coupled to the computer, and a handheld wireless pointing device with a first surface positioned against the palm of a user's hand. The device receives tactile contact from a thumb of the hand on a touchpad on a second surface, the second surface being opposite the first surface and facing away from the palm. The device also measures movement of the thumb on the second surface and transmits messaging to the USB device regarding the measured movement. The USB device, based on the received messaging, directs movement of a cursor on a display of the computer. The movement of the cursor is aligned with the movement of the thumb. The device attaches to the hand via loops on the user's index and middle fingers.
Type: Grant
Filed: January 8, 2023
Date of Patent: April 30, 2024
Inventor: Tiffany Cruz
-
Patent number: 11972046
Abstract: The present disclosure discloses a human-machine interaction method and system based on eye movement tracking. The human-machine interaction method comprises the following steps: acquiring first eye movement information of a user on a current display interface; executing a first operation command on the current display interface based on the first eye movement information; acquiring second eye movement information of the user on the current display interface under the first operation command; executing a second operation command on the current display interface under the first operation command based on the second eye movement information; acquiring third eye movement information of the user on the current display interface under the second operation command; locking or unlocking the second operation command based on the third eye movement information; and repeating the above process until the user finishes human-machine interaction.
Type: Grant
Filed: January 18, 2023
Date of Patent: April 30, 2024
Assignee: Vantronics (Hangzhou) Intelligent Technology Co. LTD.
Inventors: Hailiang Han, Vincent Jiang
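The interaction loop described (eye movement → execute command → eye movement → execute command → eye movement → lock or unlock) is essentially a small state machine. A minimal sketch; the gesture names and the choice of a "long blink" as the lock/unlock trigger are illustrative assumptions, not taken from the patent:

```python
class GazeInteraction:
    """Each eye-movement sample either executes an operation command on
    the current interface or toggles the lock on further commands."""
    def __init__(self):
        self.locked = False
        self.executed = []  # history of executed operation commands

    def on_gaze(self, gesture):
        if gesture == "long-blink":
            # Lock or unlock the previous operation command.
            self.locked = not self.locked
            return "locked" if self.locked else "unlocked"
        if self.locked:
            return "ignored"  # commands are suppressed while locked
        self.executed.append(gesture)
        return f"execute:{gesture}"
```

A sample session: a fixation executes a command, a long blink locks it (so subsequent fixations are ignored), and another long blink unlocks, after which the loop repeats until the user finishes interacting.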