Patents by Inventor Alex Olwal
Alex Olwal has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 12271156
Abstract: Aspects of the disclosure provide a hybrid smartwatch that incorporates digital technology with an analog timepiece in a wristwatch form factor. A digital display layer of a non-emissive material is configured to present notices, data, content, and other information. An analog display layer includes one or more hands of the timepiece and overlies the digital display layer. The hands may be controlled by a processor through micro-stepper motors or other actuators. Physical motion of the hands is coupled with the arrangement of content or other elements on the digital display layer, enabling numerous types of hybrid visualizations, including temporal presentations on hourly, daily, monthly, or other time scales. Shape-shifting of the watch hands between 2D and 1D arrangements can linearly focus on certain information, and content-aware layouts can highlight, bracket, occlude, or otherwise emphasize or deemphasize displayed information.
Type: Grant
Filed: September 14, 2021
Date of Patent: April 8, 2025
Assignee: Google LLC
Inventors: Alex Olwal, Philip Dam Roadley-Battin, Tyler Gough
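To make the hand-content coupling concrete, here is a minimal sketch of how hand angles might be computed so the physical hands bracket an on-screen region. All names are hypothetical; real firmware would drive the stepper motors from these angles.

```python
import math

def bracket_angles(start_minute: float, end_minute: float) -> tuple[float, float]:
    """Map a time span (minutes past the hour) to two hand angles, in degrees
    clockwise from 12 o'clock, so the hands physically bracket the on-screen
    region rendering content for that span."""
    deg_per_minute = 360.0 / 60.0  # one minute of dial = 6 degrees
    return (start_minute * deg_per_minute, end_minute * deg_per_minute)

def collinear_1d(angle_deg: float) -> tuple[float, float]:
    """Collapse both hands onto one line (the 2D-to-1D shape shift): one hand
    points at the target, the other points directly opposite."""
    return (angle_deg, (angle_deg + 180.0) % 360.0)

# Bracket a calendar event rendered between the 10- and 20-minute marks:
hour_hand, minute_hand = bracket_angles(10, 20)   # -> (60.0, 120.0)
```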
-
Publication number: 20250076819
Abstract: Aspects of the technology provide a symbiotic graphical display on a client device such as a smartwatch. The system includes at least one emissive display element and at least one non-emissive display element. The display elements are arrayed in layers or other configurations such that content or other information is concurrently aligned across the respective display surfaces of the different elements. A first set of content is rendered using the non-emissive display element while a second set of content is rendered using the emissive display element. Depending on characteristics or features of a given content item, that item may be rendered by one or both of the display elements. Certain content may be transitioned from the emissive display element to the non-emissive display element according to a time threshold or other criteria.
Type: Application
Filed: October 11, 2024
Publication date: March 6, 2025
Inventor: Alex Olwal
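A minimal sketch of the routing policy such a system might use, assuming a simple time threshold; the names and the `is_dynamic` heuristic are illustrative, not the patented logic.

```python
import time
from dataclasses import dataclass, field

EMISSIVE, NON_EMISSIVE = "emissive", "non_emissive"

@dataclass
class ContentItem:
    name: str
    is_dynamic: bool          # e.g. animations need the emissive layer
    shown_at: float = field(default_factory=time.monotonic)

def route(item: ContentItem, transition_after_s: float = 5.0) -> str:
    """Pick a display element for an item: dynamic content goes to the
    emissive element, then migrates to the low-power non-emissive element
    once it has been on screen longer than the time threshold."""
    if not item.is_dynamic:
        return NON_EMISSIVE
    age = time.monotonic() - item.shown_at
    return EMISSIVE if age < transition_after_s else NON_EMISSIVE
```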
-
Publication number: 20250080904
Abstract: The present disclosure provides computer-implemented methods, systems, and devices for capturing spatial sound for an environment. A computing system captures, using two or more microphones, audio data from an environment around a mobile device. The computing system analyzes the audio data to identify a plurality of sound sources in the environment around the mobile device based on the audio data. The computing system determines, based on characteristics of the audio data and data produced by one or more movement sensors, an estimated location for each respective sound source in the plurality of sound sources. The computing system generates a spatial sound recording of the audio data based, at least in part, on the estimated location of each respective sound source in the plurality of sound sources.
Type: Application
Filed: September 1, 2023
Publication date: March 6, 2025
Inventors: Artem Dementyev, Richard Francis Lyon, Pascal Tom Getreuer, Alex Olwal, Dmitrii Nikolayevitch Votintcev
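As an illustration of the location-estimation step, here is a hedged sketch of a classic time-difference-of-arrival (TDOA) bearing estimate for one microphone pair; the patent's method also folds in movement-sensor data, which is omitted here.

```python
import numpy as np

def doa_from_pair(x_left, x_right, mic_spacing_m, fs, c=343.0):
    """Estimate a sound source's bearing from the time difference of arrival
    between two microphones, via the peak of their cross-correlation."""
    corr = np.correlate(x_left, x_right, mode="full")
    lag = np.argmax(corr) - (len(x_right) - 1)        # lag in samples
    tdoa = lag / fs                                   # lag in seconds
    # Clamp to the physically possible range before taking arcsin.
    s = np.clip(tdoa * c / mic_spacing_m, -1.0, 1.0)
    return float(np.degrees(np.arcsin(s)))            # bearing in degrees
```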
-
Publication number: 20250054246
Abstract: A user can interact with sounds and speech in an environment using an augmented reality device. The augmented reality device can be configured to identify objects in the environment and display messages beside an object that relate to sounds produced by that object. For example, the messages may include sound statistics, transcripts of speech, and/or sound detection events. The disclosed approach enables a user to interact with these messages using gaze and gestures.
Type: Application
Filed: October 14, 2022
Publication date: February 13, 2025
Inventors: Ruofei Du, Alex Olwal
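A sketch of how gaze-plus-gesture selection of such messages could work, assuming 3D anchors for labeled objects; the cone test and pinch handler are illustrative placeholders.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class Labeled:
    name: str
    position: np.ndarray   # 3D anchor of the sound-producing object
    message: str           # transcript, stats, or detection event
    expanded: bool = False

def gaze_target(objects, origin, direction, max_angle_deg=5.0):
    """Return the labeled object closest to the gaze ray, if any falls
    within a small angular cone around it."""
    best, best_angle = None, max_angle_deg
    d = direction / np.linalg.norm(direction)
    for obj in objects:
        v = obj.position - origin
        cos_ang = np.clip(v @ d / np.linalg.norm(v), -1.0, 1.0)
        angle = np.degrees(np.arccos(cos_ang))
        if angle < best_angle:
            best, best_angle = obj, angle
    return best

def on_pinch(objects, origin, direction):
    """A pinch gesture expands or collapses the caption the user looks at."""
    obj = gaze_target(objects, origin, direction)
    if obj:
        obj.expanded = not obj.expanded
```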
-
Patent number: 12167082
Abstract: Systems and methods are related to tracking an attention of a user with respect to content presented on a virtual screen, detecting a defocus event associated with a first region of the content, and determining a next focus event associated with a second region of the content. The determination can be based at least in part on the defocus event and on the tracked attention of the user. The systems and methods can include generating, based on the determined next focus event, a marker for differentiating the second region of the content from a remainder of the content, and in response to detecting a refocus event associated with the virtual screen, triggering execution of the marker associated with the second region of the content.
Type: Grant
Filed: September 20, 2022
Date of Patent: December 10, 2024
Assignee: Google LLC
Inventors: Alex Olwal, Ruofei Du
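One plausible reading of the defocus/refocus flow, as a small state machine; the "next region" prediction here is deliberately naive (the next reading region in order), whereas the patent bases it on the tracked attention.

```python
class AttentionTracker:
    """Track which content region has the user's gaze; when attention
    leaves the screen, predict where reading will resume and plant a
    marker there, executed on the next refocus."""

    def __init__(self, regions):
        self.regions = regions          # ordered reading regions
        self.last_focused = None
        self.marker = None

    def on_gaze(self, region):
        self.last_focused = region

    def on_defocus(self):
        if self.last_focused is None:
            return
        # Naive prediction: reading resumes at the region after the one
        # the user was last attending to.
        i = self.regions.index(self.last_focused)
        self.marker = self.regions[min(i + 1, len(self.regions) - 1)]

    def on_refocus(self, highlight):
        if self.marker is not None:
            highlight(self.marker)      # e.g. underline or glow the region
            self.marker = None
```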
-
Publication number: 20240406250
Abstract: A method may receive first audio signal data at a first device, the first audio signal data decoded using a microphone array. The method may determine that the first audio signal data includes an audio sequence relating to an information sharing request, send a first signal, and receive a second signal via the microphone array responsive to the first signal. The method may verify that the second signal was sent from the direction of a voice associated with the audio sequence, the second signal including a second device identifier associated with a second device, and may establish a wireless connection with the second device using the second device identifier.
Type: Application
Filed: June 7, 2024
Publication date: December 5, 2024
Inventors: Alex Olwal, Artem Dementyev
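A hedged sketch of the direction-verification step, assuming bearings for the voice and the acoustic reply have already been estimated (e.g., with the microphone array); `connect_wireless` is a hypothetical stand-in for the platform pairing API.

```python
def connect_wireless(device_id: str) -> str:
    # Hypothetical stand-in for the platform's BLE/Wi-Fi pairing call.
    return f"connected:{device_id}"

def verify_and_pair(voice_bearing_deg, reply_bearing_deg, reply_payload,
                    tolerance_deg=15.0):
    """Pair only if the acoustic reply arrived from roughly the same
    direction as the voice that made the sharing request, then connect
    using the device identifier carried in the reply."""
    diff = abs((reply_bearing_deg - voice_bearing_deg + 180) % 360 - 180)
    if diff > tolerance_deg:
        return None                      # reply came from elsewhere: reject
    return connect_wireless(reply_payload["device_id"])
```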
-
Publication number: 20240380981
Abstract: Systems and methods are disclosed that address the need for adaptive exposure within high dynamic range (HDR) images. Solutions can leverage recent advances in virtual reality (VR) headsets and augmented reality (AR) displays equipped with infrared (IR) eye tracking devices. A gaze vector determined by the eye tracking device identifies one or more fixation points on the image that correspond to areas of faulty exposure. The exposure around a fixation point can be adaptively corrected using image processing techniques. Using spatially adaptive exposure, the resulting image, a type of foveated image, can be rendered on a low dynamic range (LDR) display with sufficient detail.
Type: Application
Filed: May 12, 2023
Publication date: November 14, 2024
Inventors: Ruofei Du, Alex Olwal
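A minimal sketch of spatially adaptive exposure around a fixation point: a Gaussian weight blends a gain-corrected exposure into the original before tone mapping. The gain and falloff are illustrative choices, not the patent's correction model.

```python
import numpy as np

def foveated_exposure(hdr, fixation_xy, sigma_px=80.0, gain=2.0):
    """Locally re-expose an HDR image (h, w, 3) around the gaze fixation:
    a Gaussian falloff blends a corrected exposure into the original,
    then the result is tone-mapped for an LDR display."""
    h, w = hdr.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    fx, fy = fixation_xy
    weight = np.exp(-((xs - fx) ** 2 + (ys - fy) ** 2) / (2 * sigma_px ** 2))
    corrected = hdr * gain                               # brighten the faulty area
    blended = weight[..., None] * corrected + (1 - weight[..., None]) * hdr
    return np.clip(blended / (1.0 + blended), 0, 1)      # Reinhard-style tone map
```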
-
Patent number: 12140910
Abstract: Aspects of the technology provide a symbiotic graphical display on a client device such as a smartwatch. The system includes at least one emissive display element and at least one non-emissive display element. The display elements are arrayed in layers or other configurations such that content or other information is concurrently aligned across the respective display surfaces of the different elements. A first set of content is rendered using the non-emissive display element while a second set of content is rendered using the emissive display element. Depending on characteristics or features of a given content item, that item may be rendered by one or both of the display elements. Certain content may be transitioned from the emissive display element to the non-emissive display element according to a time threshold or other criteria.
Type: Grant
Filed: October 4, 2022
Date of Patent: November 12, 2024
Assignee: Google LLC
Inventor: Alex Olwal
-
Patent number: 12142185
Abstract: Subsurface display interfaces and associated systems and methods are provided. In some embodiments, an example method can include: dividing a graphic to be displayed into a plurality of sets of primitives, each primitive of the sets of primitives comprising a rectangle or a line; assigning the plurality of sets of primitives to display frames of a plurality of display frames; and outputting, at a display positioned under a surface and in series, the plurality of display frames.
Type: Grant
Filed: January 5, 2022
Date of Patent: November 12, 2024
Assignee: Google LLC
Inventors: Alex Olwal, Artem Dementyev
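An illustrative sketch of the primitive-assignment step: round-robin the rectangles and lines into frames that are then shown in series. The scheduling policy is an assumption; the patent covers the general division and serialized output.

```python
def assign_to_frames(primitives, n_frames):
    """Round-robin a graphic's primitives (rectangles and lines) into
    display frames, which are output in series; persistence of vision
    fuses them into one image through the surface material."""
    frames = [[] for _ in range(n_frames)]
    for i, prim in enumerate(primitives):
        frames[i % n_frames].append(prim)
    return frames

# A graphic decomposed into rectangle and line primitives (x0, y0, x1, y1):
graphic = [("rect", 0, 0, 4, 2), ("line", 0, 3, 7, 3), ("rect", 5, 0, 7, 2)]
for frame in assign_to_frames(graphic, n_frames=2):
    print(frame)   # each frame lights only a subset of the primitives
```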
-
Publication number: 20240330362
Abstract: Methods and devices are provided where a device may receive audio data via a sensor of a computing device. The device may convert the audio data to text and extract a portion of the text. The device may input the portion of the text to a neural network-based language model to obtain at least one of a type of visual images, a source of the visual images, a content of the visual images, or a confidence score for the visual images. The device may determine at least one visual image based on at least one of the type of the visual images, the source of the visual images, the content of the visual images, or the confidence score for each of the visual images. The at least one visual image may be output on a display of the computing device to supplement the audio data and facilitate communication.
Type: Application
Filed: October 25, 2022
Publication date: October 3, 2024
Inventors: Ruofei Du, Alex Olwal, Xingyu Liu
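A sketch of the pipeline shape, with the transcriber, language model, and image source injected as callables since the abstract does not name concrete services; the prompt and thresholding are illustrative.

```python
def visuals_for_speech(transcribe, language_model, image_search,
                       audio, min_confidence=0.7):
    """Turn live audio into supporting visuals: transcribe, ask a language
    model what (if anything) is worth showing, and fetch an image only
    when the model's confidence clears a threshold."""
    text = transcribe(audio)
    phrase = text.split(".")[-1]                 # most recent utterance
    suggestion = language_model(
        f"Suggest a visual for: {phrase!r}. "
        "Return type, source, content, and confidence."
    )
    if suggestion["confidence"] < min_confidence:
        return None                              # better to show nothing
    return image_search(suggestion["content"], kind=suggestion["type"])
```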
-
Publication number: 20240303918
Abstract: A method can include receiving, via a camera, a first video stream of a face of a user; determining a location of the face of the user based on the first video stream and a facial landmark detection model; receiving, via the camera, a second video stream of the face of the user; generating a depth map based on the second video stream, the location of the face of the user, and a depth prediction model; and generating a representation of the user based on the depth map and the second video stream.
Type: Application
Filed: October 11, 2023
Publication date: September 12, 2024
Inventors: Ruofei Du, Xun Qian, Yinda Zhang, Alex Olwal
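The two-model flow, sketched with the landmark and depth models as injected callables (the abstract does not name specific networks):

```python
def bounding_box(points):
    """Axis-aligned box around a set of (x, y) landmarks."""
    xs, ys = zip(*points)
    return (min(xs), min(ys), max(xs), max(ys))

def user_representation(frame, detect_landmarks, predict_depth, render):
    """Two-stage pipeline: cheap facial-landmark detection localizes the
    face, a depth-prediction model densifies that region, and the RGB
    frame plus depth map drive a 3D representation of the user."""
    landmarks = detect_landmarks(frame)          # from the first stream
    face_box = bounding_box(landmarks)
    depth = predict_depth(frame, face_box)       # from the second stream
    return render(frame, depth)
```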
-
Patent number: 12057126
Abstract: According to an aspect, a method for distributed sound/image recognition using a wearable device includes receiving, via at least one sensor device, sensor data, and detecting, by a classifier of the wearable device, whether or not the sensor data includes an object of interest. The classifier is configured to execute a first machine learning (ML) model. The method includes transmitting, via a wireless connection, the sensor data to a computing device in response to the object of interest being detected within the sensor data, where the sensor data is configured to be used by a second ML model on the computing device or a server computer for further sound/image classification.
Type: Grant
Filed: October 13, 2020
Date of Patent: August 6, 2024
Assignee: Google LLC
Inventors: Alex Olwal, Kevin Balke, Dmitrii Votintcev
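The gating logic at the heart of this split is small; a hedged sketch, with the on-device model and radio call as placeholders:

```python
def on_sensor_data(samples, tiny_model, send_to_companion, threshold=0.5):
    """Gate for distributed recognition: a small on-device model scores
    every window of sensor data, and only windows it flags as containing
    an object of interest are radioed to the larger model downstream,
    saving battery and bandwidth on the wearable."""
    score = tiny_model(samples)          # first-stage ML model on the wearable
    if score >= threshold:
        send_to_companion(samples)       # second-stage model classifies further
```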
-
Patent number: 12026859
Abstract: A method including receiving, by a companion device from an optical display, distortion information associated with a geometric distortion that arises when rendering an image on a display of the optical display; distorting, by the companion device, the image using the distortion information; preprocessing, by the companion device, the distorted image based on compression artifacts; compressing, by the companion device, the distorted image; and communicating, by the companion device, the compressed image to the optical display.
Type: Grant
Filed: December 9, 2021
Date of Patent: July 2, 2024
Assignee: Google LLC
Inventors: Hendrik Wagenaar, Alex Olwal
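A sketch of the companion-side pipeline order described above, with each stage injected as a callable since the abstract does not fix particular warp, filter, or codec choices:

```python
def prepare_frame(image, inverse_warp, denoise, encode):
    """Companion-side pipeline: pre-distort the frame using the display's
    reported geometric model (so the display's optics undo the warp),
    soften detail that would amplify compression artifacts, then compress
    the result for transmission."""
    warped = inverse_warp(image)     # built from the display's distortion info
    clean = denoise(warped)          # preprocessing against codec artifacts
    return encode(clean)             # e.g. a low-latency image/video codec

# payload = prepare_frame(frame, inverse_warp, denoise, encode)
# radio.send(payload)               # to the optical head-mounted display
```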
-
Patent number: 12015663
Abstract: A method may receive first audio signal data at a first device, the first audio signal data decoded using a microphone array. The method may determine that the first audio signal data includes an audio sequence relating to an information sharing request, send a first signal, and receive a second signal via the microphone array responsive to the first signal. The method may verify that the second signal was sent from the direction of a voice associated with the audio sequence, the second signal including a second device identifier associated with a second device, and may establish a wireless connection with the second device using the second device identifier.
Type: Grant
Filed: January 27, 2023
Date of Patent: June 18, 2024
Assignee: Google LLC
Inventors: Alex Olwal, Artem Dementyev
-
Patent number: 12008204
Abstract: This document describes techniques directed to a scalable gesture sensor for wearable and soft electronic devices. The scalable gesture sensor is integrated into an object such as a wearable garment or a large-surface embedded system to provide a touch-sensitive surface for the object. The sensor includes a repeated localized crossover pattern formed by the same few sensor lines, resulting in the same two conductive lines having multiple crossover points across the touch-sensitive surface. The repeated crossover pattern enables detection of the occurrence and relative direction of a swipe gesture based at least on a repeated sequence of capacitance changes over a set of conductive lines in the repeated crossover pattern. The scalable gesture sensor is also computationally simple, uses low power, and is uniquely scalable to cover a large area with few electrodes.
Type: Grant
Filed: October 4, 2021
Date of Patent: June 11, 2024
Assignee: Google LLC
Inventors: Alex Olwal, Thad Eugene Starner
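A toy decoder for the repeated-crossover idea: if the same few lines repeat across the surface, a swipe produces a cyclic activation sequence whose direction can be read off directly. Three lines and the exact cycle test are illustrative assumptions.

```python
def swipe_direction(touch_sequence, n_lines=3):
    """With the same few conductive lines repeated across the surface, a
    swipe shows up as a cyclic sequence of line activations; the cycle's
    direction (0,1,2,0,1,... vs 2,1,0,2,1,...) gives the swipe's relative
    direction without needing per-position electrodes."""
    steps = [(b - a) % n_lines for a, b in zip(touch_sequence, touch_sequence[1:])]
    if steps and all(s == 1 for s in steps):
        return "forward"
    if steps and all(s == n_lines - 1 for s in steps):
        return "backward"
    return None                       # noise or an ambiguous gesture

print(swipe_direction([0, 1, 2, 0, 1]))   # -> forward
print(swipe_direction([1, 0, 2, 1, 0]))   # -> backward
```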
-
Patent number: 11967335
Abstract: An augmented reality (AR) device, such as AR glasses, may include a microphone array. The sensitivity of the microphone array can be directed to a target by beamforming, which includes combining the audio of each microphone of the array in a particular way based on a location of the target. The present disclosure describes systems and methods to determine the location of the target based on a gaze of a user and beamform the audio accordingly. This eye-tracked beamforming (i.e., foveated beamforming) can be used by AR applications to enhance sounds from a gaze direction and to suppress sounds from other directions. Additionally, the gaze information can be used to help visualize the results of an AR application, such as speech-to-text.
Type: Grant
Filed: September 3, 2021
Date of Patent: April 23, 2024
Assignee: Google LLC
Inventors: Ruofei Du, Hendrik Wagenaar, Alex Olwal
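A hedged sketch of gaze-steered delay-and-sum beamforming, assuming far-field sources and known microphone positions; `np.roll` wraps samples at the frame edges, which a real implementation would handle with proper buffering.

```python
import numpy as np

def foveated_beamform(channels, mic_positions, gaze_dir, fs, c=343.0):
    """Delay-and-sum beamformer steered by gaze: delay each microphone's
    signal according to its position's projection onto the gaze direction,
    then average, reinforcing sound from where the user is looking.

    channels: (n_mics, n_samples) array; mic_positions: list of 3-vectors."""
    gaze = np.asarray(gaze_dir, dtype=float)
    gaze /= np.linalg.norm(gaze)
    out = np.zeros(channels.shape[1])
    for sig, pos in zip(channels, mic_positions):
        # Mics closer to the source (larger projection) hear it earlier,
        # so delay them to align all channels.
        delay = int(round((np.asarray(pos) @ gaze) / c * fs))
        out += np.roll(sig, delay)
    return out / len(channels)
```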
-
Publication number: 20240073571
Abstract: A method of generating a virtual microphone array includes identifying a plurality of microphones, identifying a relative position in space of each of the plurality of microphones, generating a virtual microphone array based on the plurality of microphones and the relative position in space of each of the plurality of microphones, sensing audio at each of the plurality of microphones, and generating an audio signal of the virtual microphone array based on the sensed audio.
Type: Application
Filed: August 31, 2023
Publication date: February 29, 2024
Inventors: Clayton Woodward Bavor, Jr., Alex Olwal
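One way the bookkeeping for such a virtual array might look, assuming per-device clock offsets and relative positions have already been estimated (the hard part, which the abstract leaves open):

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class Mic:
    device: str
    position: np.ndarray     # relative position in a shared frame, meters
    offset_samples: int      # clock offset vs. the reference device, >= 0

class VirtualArray:
    """Treat microphones scattered across nearby devices as one array:
    keep each mic's relative position and clock offset so their streams
    can be aligned into a single multichannel signal."""

    def __init__(self):
        self.mics: list[Mic] = []

    def add(self, mic: Mic):
        self.mics.append(mic)

    def capture(self, raw_streams):
        """Align per-device 1-D streams into one (n_mics, n_samples) signal."""
        n = min(len(s) - m.offset_samples
                for s, m in zip(raw_streams, self.mics))
        return np.stack([s[m.offset_samples:m.offset_samples + n]
                         for s, m in zip(raw_streams, self.mics)])
```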
-
Patent number: 11868583
Abstract: Systems and methods are provided in which physical objects in the ambient environment can function as user interface implements in an augmented reality environment. A physical object detected within a field of view of a camera of a computing device may be designated as a user interface implement in response to a user command. User interfaces may be attached to the designated physical object, to provide a tangible user interface implement for user interaction with the augmented reality environment.
Type: Grant
Filed: March 28, 2022
Date of Patent: January 9, 2024
Assignee: Google LLC
Inventors: Ruofei Du, Alex Olwal, Mathieu Simon Le Goc, David Kim, Danhang Tang
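A sketch of the designate-and-attach bookkeeping, assuming an object tracker that reports per-frame 6-DoF poses; names are illustrative.

```python
class TangibleUI:
    """Registry that turns detected physical objects into AR interface
    implements: a user command designates an object, and widgets attached
    to it are re-rendered at the object's tracked pose every frame."""

    def __init__(self):
        self.attached = {}               # object id -> list of widgets

    def designate(self, obj_id):
        self.attached.setdefault(obj_id, [])

    def attach(self, obj_id, widget):
        if obj_id in self.attached:      # only designated objects carry UI
            self.attached[obj_id].append(widget)

    def render(self, detections, draw):
        # detections: object id -> 6-DoF pose from the device camera
        for obj_id, widgets in self.attached.items():
            pose = detections.get(obj_id)
            if pose is not None:
                for w in widgets:
                    draw(w, pose)        # the UI follows the physical object
```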
-
Publication number: 20230367960
Abstract: A method performed by a computing system comprises generating text from audio data and determining an end portion of the text to include in a summarization of the text. The determination is based on the length of the portion of the audio data from which the text was generated, which ends with a proposed end portion, and on a time value associated with the proposed end portion, the proposed end portion including a word from the text.
Type: Application
Filed: May 10, 2023
Publication date: November 16, 2023
Inventors: Boris Smus, Vikas Bahirwani, Ruofei Du, Christopher Ross, Alex Olwal
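A deliberately simplified sketch of the end-portion choice: among candidate end words, pick the one whose audio timestamp best matches a target duration. The patent's criterion combines audio length and the proposal's time value; this sketch collapses them into one distance.

```python
def choose_end(proposed_ends, target_seconds):
    """Pick the summary's end word: among proposed end portions, choose the
    one whose audio timestamp is closest to the target span, so the summary
    covers roughly the desired stretch of speech."""
    # Each proposal pairs a word from the transcript with the time (s)
    # at which it was spoken, e.g. ("meeting", 61.5).
    return min(proposed_ends, key=lambda p: abs(p[1] - target_seconds))

ends = [("agenda", 12.3), ("budget", 58.0), ("meeting", 61.5)]
print(choose_end(ends, target_seconds=60.0))   # -> ('meeting', 61.5)
```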
-
Publication number: 20230341704
Abstract: An eyewear device includes an adjustable nose bridge adapted to interconnect two half-frames of an eyeglasses frame. The two half-frames are adapted to hold a pair of see-through lenses. A virtual display is embedded in, or overlaid on, at least one of the pair of see-through lenses. The eyeglasses frame uses the adjustable nose bridge over a person's nose and temple pieces that rest over the person's ears to hold the pair of see-through lenses in position in front of the person's eyes. The virtual display has an associated eye box for full viewing of the virtual display by the person. The adjustable nose bridge includes one or more bridge-frame fastener arrangements adapted to provide independent adjustments of one or more geometrical parameters of the eyeglasses frame, aligning the optics of the eyeglasses frame and the eye box to the person's face and eye geometry.
Type: Application
Filed: September 29, 2020
Publication date: October 26, 2023
Inventors: Alex Olwal, Dmitrii Votintcev