Patents by Inventor Christopher B. Fleizach

Christopher B. Fleizach has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20250037033
    Abstract: A device implementing a system for machine-learning based gesture recognition includes at least one processor configured to receive sensor data for a first window of time and additional sensor data for a second window of time overlapping the first window of time. The sensor data and the additional sensor data are provided as inputs to a machine learning model, the machine learning model having been trained to output a predicted gesture, a predicted gesture start time, and a predicted gesture end time based on the sensor data. The at least one processor determines a predicted gesture based on an output from the machine learning model and, in response to determining the predicted gesture, performs a predetermined action on the device.
    Type: Application
    Filed: October 14, 2024
    Publication date: January 30, 2025
    Inventors: Charles MAALOUF, Shawn R. SCULLY, Christopher B. FLEIZACH, Tu K. NGUYEN, Lilian H. LIANG, Warren J. SETO, Julian QUINTANA, Michael J. BEYHS, Hojjat SEYED MOUSAVI, Behrooz SHAHSAVARI
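A minimal sketch of the overlapping-window pipeline this abstract describes, assuming invented type names (SensorWindow, GesturePrediction) and a generic model interface; the publication does not disclose code or gesture labels:

```swift
import Foundation

// A window of sensor samples covering a span of time.
struct SensorWindow {
    let start: TimeInterval
    let end: TimeInterval
    let samples: [Double]
}

// What the trained model emits, per the abstract: a gesture plus its
// predicted start and end times.
struct GesturePrediction {
    let gesture: String          // e.g. "pinch"; labels are assumptions
    let start: TimeInterval
    let end: TimeInterval
}

// Stand-in interface for the model; the real model is learned, not coded.
protocol GestureModel {
    func predict(window: SensorWindow, overlapping: SensorWindow) -> GesturePrediction
}

// Determine the gesture from the model output and, in response, perform
// the predetermined action, as the abstract describes.
func recognize(model: GestureModel,
               first: SensorWindow,
               second: SensorWindow,
               perform: (String) -> Void) {
    let prediction = model.predict(window: first, overlapping: second)
    if prediction.gesture != "none" { perform(prediction.gesture) }
}
```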
  • Patent number: 12189865
    Abstract: The present disclosure generally relates to navigating user interfaces using hand gestures.
    Type: Grant
    Filed: February 14, 2023
    Date of Patent: January 7, 2025
    Assignee: Apple Inc.
    Inventors: Tu K. Nguyen, James N. Cartwright, Elizabeth C. Cranfill, Christopher B. Fleizach, Joshua R. Ford, Jeremiah R. Johnson, Charles Maalouf, Heriberto Nieto, Jennifer D. Patton, Hojjat Seyed Mousavi, Shawn R. Scully, Ibrahim G. Yusuf, Joanna Arreaza-Taylor, Hannah G. Coleman, Yoonju Han
  • Publication number: 20240428539
    Abstract: While a first view of a three-dimensional environment is visible, a computer system detects a first input meeting selection criteria. If, when the first input was detected, a user was directing attention to a first portion of the first view that has a spatial relationship to a viewport through which the three-dimensional environment is visible, the computer system displays a user interface object including affordances for accessing functions of the computer system; otherwise, the computer system forgoes displaying the user interface object. While a different view of the three-dimensional environment is visible, the computer system detects a second input meeting the selection criteria. If, when the second input was detected, the user was directing attention to a second portion of the different view that has the same spatial relationship to the viewport, the computer system displays the user interface object; otherwise, the computer system forgoes displaying the user interface object.
    Type: Application
    Filed: May 29, 2024
    Publication date: December 26, 2024
    Inventors: Matan Stauber, Israel Pastrana Vicente, Stephen O. Lemay, William A. Sorrentino, III, Zoey C. Taylor, Kristi E. Bauerly, Daniel M. Golden, Christopher B. Fleizach, Evgenii Krivoruchko, Amy E. DeDonato
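A hedged sketch of the attention test in publication 20240428539. All geometry here is an assumption; the publication does not specify which spatial relationship to the viewport acts as the trigger:

```swift
// Assumed types; the patent speaks only of "a portion of the view" with
// a spatial relationship to the viewport.
struct Viewport { let width: Double; let height: Double }
struct GazePoint { let x: Double; let y: Double }   // viewport coordinates

// Assumed relationship: a narrow band along the top edge of the viewport.
func attentionIsOnTriggerRegion(_ gaze: GazePoint, in viewport: Viewport) -> Bool {
    gaze.y <= viewport.height * 0.1 && gaze.x >= 0 && gaze.x <= viewport.width
}

// On a selection input, display the affordances only if attention meets
// the spatial test; otherwise forgo displaying them, per the abstract.
func handleSelectionInput(gaze: GazePoint, viewport: Viewport, showSystemUI: () -> Void) {
    if attentionIsOnTriggerRegion(gaze, in: viewport) { showSystemUI() }
}
```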
  • Patent number: 12175070
    Abstract: Some embodiments described in this disclosure are directed to a first electronic device that operates in a remote interaction mode with a second electronic device, where user interactions with images displayed on the first electronic device cause the second electronic device to update display of the images and/or corresponding user interfaces on the second electronic device.
    Type: Grant
    Filed: May 17, 2023
    Date of Patent: December 24, 2024
    Assignee: Apple Inc.
    Inventors: Christopher B. Fleizach, Tu K. Nguyen, Virata Yindeeyoungyeon
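A minimal sketch of the remote-interaction loop in patent 12175070: an interaction with an image mirrored on the first device is forwarded so the second device updates its own display. All names here are illustrative, not Apple API:

```swift
import Foundation

// A touch on the mirrored image, in coordinates both devices understand.
struct MirroredTouch: Codable {
    let x: Double               // normalized position within the image
    let y: Double
}

protocol RemoteDisplayLink {
    func forward(_ touch: MirroredTouch)      // first device -> second device
    func refreshMirror(with imageData: Data)  // second device -> first device
}
```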
  • Publication number: 20240386877
    Abstract: The present disclosure generally relates to techniques and interfaces for generating synthesized speech outputs. For example, a user interface for a text-to-speech service can include ranked and/or categorized phrases, which can be selected to enter as text. A synthesized speech output is then generated to deliver any entered text, for example, using a personalized voice model.
    Type: Application
    Filed: November 22, 2023
    Publication date: November 21, 2024
    Inventors: Eric D. SCHLAKMAN, Cooper BARTH, James N. CARTWRIGHT, Ian M. FISCH, Christopher B. FLEIZACH, Gregory F. HUGHES, Areeba KAMAL, Grant P. MALONEY, Darren C. MINIFIE, Christopher J. ROMNEY, Eric T. SEYMOUR, Margarita ZAKIROVA
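A hedged sketch of the phrase interface in publication 20240386877: phrases are ranked and categorized so one can be selected as entered text, then rendered by a (personalized) synthesizer. The ranking signal (use count) is an assumption:

```swift
// A candidate phrase in the text-to-speech interface.
struct Phrase {
    let text: String
    let category: String        // e.g. "Greetings"; categories are assumed
    var useCount: Int           // assumed ranking signal
}

// Rank phrases within a category, most-used first.
func rankedPhrases(_ phrases: [Phrase], in category: String) -> [Phrase] {
    phrases.filter { $0.category == category }
           .sorted { $0.useCount > $1.useCount }
}

// Selecting a phrase enters its text; a voice-model hook speaks it.
func speak(_ phrase: Phrase, synthesize: (String) -> Void) {
    synthesize(phrase.text)     // e.g. a personal voice model, per the abstract
}
```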
  • Publication number: 20240386716
    Abstract: The present disclosure generally relates to detecting text. The present disclosure describes at least methods for managing a text detection mode, identifying targeted text, and managing modes of a computer system.
    Type: Application
    Filed: March 5, 2024
    Publication date: November 21, 2024
    Inventors: Allison LETTIERE, Christopher B. FLEIZACH, Darren C. MINIFIE, Bianca J. YACOUB, Nandini KANNAMANGALAM SUNDARA RAMAN, Elizabeth A. OTTENS, Giuseppe PARZIALE
  • Publication number: 20240357002
    Abstract: The present disclosure generally relates to communicating between computer systems, and more specifically to techniques for communicating user interface content.
    Type: Application
    Filed: October 11, 2023
    Publication date: October 24, 2024
    Inventors: Shardul OZA, Vikrant KASARABADA, Tu K. NGUYEN, Virata YINDEEYOUNGYEON, Gennadiy SHEKHTMAN, Christopher B. FLEIZACH
  • Patent number: 12118443
    Abstract: A device implementing a system for machine-learning based gesture recognition includes at least one processor configured to receive, from a first sensor of the device, first sensor output of a first type, and to receive, from a second sensor of the device, second sensor output of a second type that differs from the first type. The at least one processor is further configured to provide the first sensor output and the second sensor output as inputs to a machine learning model, the machine learning model having been trained to output a predicted gesture based on sensor output of the first type and sensor output of the second type. The at least one processor is further configured to determine the predicted gesture based on an output from the machine learning model, and to perform, in response to determining the predicted gesture, a predetermined action on the device.
    Type: Grant
    Filed: May 26, 2023
    Date of Patent: October 15, 2024
    Assignee: Apple Inc.
    Inventors: Charles Maalouf, Shawn R. Scully, Christopher B. Fleizach, Tu K. Nguyen, Lilian H. Liang, Warren J. Seto, Julian Quintana, Michael J. Beyhs, Hojjat Seyed Mousavi, Behrooz Shahsavari
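A hedged sketch of the two-modality fusion in patent 12118443. The specific sensor types and the fusion interface are assumptions; the patent says only that the model is trained on two differing sensor output types:

```swift
// Two differing sensor modalities (assumed: motion and heart rate).
struct AccelerometerFrame { let x, y, z: Double }    // first sensor type
struct HeartRateFrame { let bpm: Double }            // second, differing type

// Stand-in for the model trained on both output types at once.
protocol FusionGestureModel {
    func predict(_ motion: [AccelerometerFrame], _ pulse: [HeartRateFrame]) -> String
}

// Determine the gesture from both inputs and perform the predetermined
// device action in response, per the abstract.
func recognize(model: FusionGestureModel,
               motion: [AccelerometerFrame],
               pulse: [HeartRateFrame],
               perform: (String) -> Void) {
    let gesture = model.predict(motion, pulse)
    if gesture != "none" { perform(gesture) }
}
```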
  • Patent number: 12099715
    Abstract: The present disclosure generally relates to exploring a geographic region that is displayed in computer user interfaces. In some embodiments, a method includes, at an electronic device with a display and one or more input devices, displaying a map of a geographic region on the display and detecting a first user input to select a starting location on the map. After detecting the first user input, the method includes detecting a second user input to select a first direction of navigation from the starting location. In response to detecting the second user input, the method includes determining a path on the map that traverses in the first direction of navigation and connects the starting location to an ending location, and providing audio that includes traversal information about traversing along the path in the geographic region in the first direction of navigation and from the starting location to the ending location.
    Type: Grant
    Filed: January 30, 2023
    Date of Patent: September 24, 2024
    Assignee: Apple Inc.
    Inventors: Christopher B. Fleizach, Michael A. Troute, Reginald D. Hudson, Aaron M. Everitt, Conor M. Hughes
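A minimal sketch of the exploration flow in patent 12099715: pick a start, pick a direction, derive a path, then speak traversal information. The grid map, compass headings, and speech hook are illustrative assumptions:

```swift
// A simplified grid model of the displayed map region.
struct MapPoint: Equatable { var row: Int; var col: Int }
enum Heading { case north, south, east, west }

// Walk from the starting location in the chosen direction for a fixed
// number of steps to produce the path (the real ending location would be
// determined by the map data).
func path(from start: MapPoint, heading: Heading, steps: Int) -> [MapPoint] {
    (1...steps).map { i -> MapPoint in
        switch heading {
        case .north: return MapPoint(row: start.row - i, col: start.col)
        case .south: return MapPoint(row: start.row + i, col: start.col)
        case .east:  return MapPoint(row: start.row, col: start.col + i)
        case .west:  return MapPoint(row: start.row, col: start.col - i)
        }
    }
}

// Provide audio with traversal information about the path, per the abstract.
func announce(_ route: [MapPoint], speak: (String) -> Void) {
    speak("Path traverses \(route.count) blocks in the chosen direction")
}
```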
  • Publication number: 20240201842
    Abstract: An electronic device with a display and a touch-sensitive surface displays, on the display, a first visual indicator. The electronic device receives a first single touch input on the touch-sensitive surface at a location that corresponds to the first visual indicator; and, in response to detecting the first single touch input on the touch-sensitive surface at a location that corresponds to the first visual indicator, replaces display of the first visual indicator with display of a first menu. The first menu includes a virtual touches selection icon. In response to detecting selection of the virtual touches selection icon, the electronic device displays a menu of virtual multitouch contacts.
    Type: Application
    Filed: February 23, 2024
    Publication date: June 20, 2024
    Inventors: Eric T. SEYMOUR, Christopher B. FLEIZACH
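A hedged state sketch of the indicator-to-menu flow in publication 20240201842; the menu contents beyond the virtual touches selection icon are assumptions:

```swift
// The three UI states the abstract walks through.
enum AssistiveUIState {
    case indicator             // floating visual indicator shown
    case menu                  // first menu replaces the indicator
    case virtualTouchesMenu    // menu of virtual multitouch contacts
}

// Advance the state machine: a single touch on the indicator opens the
// menu; selecting the virtual touches icon opens the contacts menu.
func next(_ state: AssistiveUIState, didTapVirtualTouches: Bool = false) -> AssistiveUIState {
    switch state {
    case .indicator: return .menu
    case .menu: return didTapVirtualTouches ? .virtualTouchesMenu : .menu
    case .virtualTouchesMenu: return .virtualTouchesMenu
    }
}
```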
  • Patent number: 11979656
    Abstract: An electronic device with a camera obtains, with the camera, one or more images of a scene. The electronic device detects a respective feature within the scene. In accordance with a determination that a first mode is active on the device, the electronic device provides a first audible description of the scene. The first audible description provides information indicating a size and/or position of the respective feature relative to a first set of divisions applied to the one or more images of the scene. In accordance with a determination that the first mode is not active on the device, the electronic device provides a second audible description of the scene. The second audible description is distinct from the first audible description and does not include the information indicating the size and/or position of the respective feature relative to the first set of divisions.
    Type: Grant
    Filed: November 2, 2022
    Date of Patent: May 7, 2024
    Assignee: APPLE INC.
    Inventors: Christopher B. Fleizach, Darren C. Minifie, Eryn R. Wells, Nandini Kannamangalam Sundara Raman
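A minimal sketch of the division-relative description in patent 11979656: when the mode is active, a detected feature's position is reported against a set of divisions. The 3x3 grid is an assumption; the patent says only "a first set of divisions":

```swift
// A detected feature in normalized [0, 1] image coordinates (assumed).
struct Feature { let x, y: Double }

// When the grid mode is active, include the feature's position relative
// to the divisions; otherwise give the plain description, per the abstract.
func describe(_ feature: Feature, gridModeActive: Bool) -> String {
    guard gridModeActive else { return "A face is in the scene" }
    let cols = ["left", "center", "right"]
    let rows = ["top", "middle", "bottom"]
    let col = cols[min(Int(feature.x * 3), 2)]
    let row = rows[min(Int(feature.y * 3), 2)]
    return "A face is in the \(row) \(col) of the frame"
}
```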
  • Patent number: 11947792
    Abstract: An electronic device with a display and a touch-sensitive surface displays, on the display, a first visual indicator. The electronic device receives a first single touch input on the touch-sensitive surface at a location that corresponds to the first visual indicator; and, in response to detecting the first single touch input on the touch-sensitive surface at a location that corresponds to the first visual indicator, replaces display of the first visual indicator with display of a first menu. The first menu includes a virtual touches selection icon. In response to detecting selection of the virtual touches selection icon, the electronic device displays a menu of virtual multitouch contacts.
    Type: Grant
    Filed: October 19, 2020
    Date of Patent: April 2, 2024
    Assignee: Apple Inc.
    Inventors: Eric T. Seymour, Christopher B. Fleizach
  • Publication number: 20240103714
    Abstract: An electronic device with a display and a touch-sensitive surface displays, on the display, a first visual indicator that corresponds to a virtual touch. The device receives a first input from an adaptive input device. In response to receiving the first input from the adaptive input device, the device displays a first menu on the display. The first menu includes a virtual touches selection icon. In response to detecting selection of the virtual touches selection icon, a menu of virtual multitouch contacts is displayed.
    Type: Application
    Filed: October 2, 2023
    Publication date: March 28, 2024
    Inventors: Christopher B. FLEIZACH, Eric T. SEYMOUR, James P. CRAIG
  • Publication number: 20240061547
    Abstract: While a view of a three-dimensional environment is visible via a display generation component of a computer system, the computer system receives, from a user, one or more first user inputs corresponding to selection of a respective direction in the three-dimensional environment relative to a reference point associated with the user, and displays a ray extending from the reference point in the respective direction. While displaying the ray, the system displays a selection cursor moving along the ray independently of user input. When the selection cursor is at a respective position along the ray, the system receives one or more second user inputs corresponding to a request to stop the movement of the selection cursor along the ray and, in response, sets a target location for a next user interaction to a location in the three-dimensional environment that corresponds to the respective position of the selection cursor along the ray.
    Type: Application
    Filed: August 14, 2023
    Publication date: February 22, 2024
    Inventors: Christopher B. Fleizach, James T. Turner, Daniel M. Golden, Kristi E.S. Bauerly, John M. Nefulda
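A hedged sketch of the ray-and-cursor targeting in publication 20240061547: the cursor advances along the ray as a function of elapsed time, independent of user input, until a second input stops it and fixes the target. The vector math and sweep speed are illustrative:

```swift
import Foundation

// A point or direction in the three-dimensional environment.
struct Vec3 { let x, y, z: Double }

// The point at a given distance along the ray from the reference point.
func pointAlong(origin: Vec3, direction: Vec3, distance: Double) -> Vec3 {
    Vec3(x: origin.x + direction.x * distance,
         y: origin.y + direction.y * distance,
         z: origin.z + direction.z * distance)
}

// The cursor's position depends only on elapsed time, not on user input;
// stopping it sets the target location for the next interaction.
func cursorPosition(origin: Vec3, direction: Vec3,
                    elapsed: TimeInterval, speed: Double = 0.5) -> Vec3 {
    pointAlong(origin: origin, direction: direction, distance: speed * elapsed)
}
```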
  • Publication number: 20230409194
    Abstract: Some embodiments described in this disclosure are directed to a first electronic device that operates in a remote interaction mode with a second electronic device, where user interactions with images displayed on the first electronic device cause the second electronic device to update display of the images and/or corresponding user interfaces on the second electronic device.
    Type: Application
    Filed: May 17, 2023
    Publication date: December 21, 2023
    Inventors: Christopher B. FLEIZACH, Tu K. NGUYEN, Virata YINDEEYOUNGYEON
  • Publication number: 20230393865
    Abstract: The present disclosure generally relates to activating and managing dual user interface modes for an operating system. An electronic device configured to run an operating system in a first user interface mode receives, while operating in the first user interface mode, a predefined sequence of inputs and, in response to receiving the predefined sequence of inputs, transitions the operating system from the first user interface mode to a second user interface mode. The second user interface mode has different input capabilities than the first user interface mode.
    Type: Application
    Filed: June 1, 2023
    Publication date: December 7, 2023
    Applicant: APPLE INC.
    Inventors: Christopher B. FLEIZACH, Christopher J. ROMNEY, Clare T. KASEMSET, Joaquim Goncalo LOBO FERREIRA DA SILVA, Eric T. SEYMOUR, Isis Naomi RAMÍREZ MOLERES, Allen WHEARRY JR., Adrian T. CHAMBERS, Margarita ZAKIROVA
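A minimal sketch of the mode switch in publication 20230393865: a predefined input sequence transitions the operating system between two UI modes with different input capabilities. The triple-press trigger is an assumption; the publication does not name the sequence:

```swift
// The two user interface modes the abstract describes.
enum UIMode { case standard, simplified }

struct ModeController {
    private(set) var mode: UIMode = .standard
    private var recentInputs: [String] = []

    // Accumulate inputs and transition when the predefined sequence
    // (assumed here: three side-button presses) is received.
    mutating func register(_ input: String) {
        recentInputs.append(input)
        if recentInputs.suffix(3).elementsEqual(["side", "side", "side"]) {
            mode = (mode == .standard) ? .simplified : .standard
            recentInputs.removeAll()
        }
    }
}
```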
  • Publication number: 20230376266
    Abstract: In some embodiments, a computer system detects objects, such as physical objects in the physical environment of the computer system. In some embodiments, the computer system presents indications of characteristics of the physical objects. In some embodiments, the physical objects are entry points to physical locations.
    Type: Application
    Filed: May 17, 2023
    Publication date: November 23, 2023
    Inventors: Christopher B. FLEIZACH, Allison LETTIERE, Cole A. GLEASON, Darren C. MINIFIE, Nandini KANNAMANGALAM SUNDARA RAMAN, Ryan N. DOUR
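A hedged sketch of the detection-and-announcement flow in publication 20230376266, using a door as the entry point. The specific characteristics (distance, open/closed state, label) are illustrative assumptions:

```swift
// A detected entry point and the characteristics surfaced to the user.
struct DetectedDoor {
    let distanceMeters: Double
    let isOpen: Bool
    let label: String?          // e.g. a room number read from the door
}

// Compose the indication of characteristics presented by the system.
func characteristics(of door: DetectedDoor) -> String {
    var parts = ["Door, \(door.distanceMeters) meters away",
                 door.isOpen ? "open" : "closed"]
    if let label = door.label { parts.append(label) }
    return parts.joined(separator: ", ")
}
```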
  • Publication number: 20230376193
    Abstract: The present disclosure generally relates to displaying user interfaces with device controls.
    Type: Application
    Filed: May 15, 2023
    Publication date: November 23, 2023
    Inventors: Elizabeth HAN, Joanna ARREAZA-TAYLOR, Hannah G. COLEMAN, Caroline J. CRANDALL, Christopher B. FLEIZACH, Charles MAALOUF, Tu K. NGUYEN, Jennifer D. PATTON
  • Publication number: 20230375359
    Abstract: In some embodiments, a computer system presents navigation directions from a first location to a second location. In some embodiments, if the current location of the computer system is different from the first location, the computer system presents audio and/or tactile feedback indicating distance and/or direction to the first location.
    Type: Application
    Filed: May 17, 2023
    Publication date: November 23, 2023
    Inventors: Christopher B. FLEIZACH, Allen WHEARRY, JR., Ryan N. DOUR
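A minimal sketch of the back-to-start feedback in publication 20230375359: when the current location differs from the route's first location, indicate distance and direction to it. Flat-plane geometry and the distance threshold are simplifying assumptions:

```swift
import Foundation

// A location in a simplified flat-plane coordinate system (assumed).
struct Point { let x, y: Double }

// Returns a description of distance and direction to the route's first
// location, or nil if the user is already there, per the abstract.
func feedbackTowardStart(current: Point, start: Point) -> String? {
    let dx = start.x - current.x, dy = start.y - current.y
    let distance = (dx * dx + dy * dy).squareRoot()
    guard distance > 1 else { return nil }        // already at the start
    let bearing = atan2(dy, dx) * 180 / .pi       // degrees, for speech/haptics
    return String(format: "Start is %.0f meters away, bearing %.0f degrees",
                  distance, bearing)
}
```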
  • Publication number: 20230325719
    Abstract: A device implementing a system for machine-learning based gesture recognition includes at least one processor configured to receive, from a first sensor of the device, first sensor output of a first type, and to receive, from a second sensor of the device, second sensor output of a second type that differs from the first type. The at least one processor is further configured to provide the first sensor output and the second sensor output as inputs to a machine learning model, the machine learning model having been trained to output a predicted gesture based on sensor output of the first type and sensor output of the second type. The at least one processor is further configured to determine the predicted gesture based on an output from the machine learning model, and to perform, in response to determining the predicted gesture, a predetermined action on the device.
    Type: Application
    Filed: May 26, 2023
    Publication date: October 12, 2023
    Inventors: Charles MAALOUF, Shawn R. SCULLY, Christopher B. FLEIZACH, Tu K. NGUYEN, Lilian H. LIANG, Warren J. SETO, Julian QUINTANA, Michael J. BEYHS, Hojjat SEYED MOUSAVI, Behrooz SHAHSAVARI