Patents by Inventor Alexander Li Honda

Alexander Li Honda has filed for patents to protect the following inventions. This listing includes pending patent applications as well as patents already granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11958198
    Abstract: A lab automation system receives an instruction from a user to perform a protocol within a lab via an interface including a graphical representation of the lab. The lab includes a robot and a set of equipment rendered within the graphical representation of the lab. The lab automation system identifies an ambiguous term of the instruction and pieces of equipment corresponding to the ambiguous term and modifies the interface to include a predictive text interface element listing the pieces of equipment. Upon a mouseover of a listed piece of equipment within the predictive text interface element, the lab automation system modifies the graphical representation of the lab to highlight the listed piece of equipment corresponding to the mouseover. Upon a selection of the listed piece of equipment within the predictive text interface element, the lab automation system modifies the instruction to include the listed piece of equipment.
    Type: Grant
    Filed: August 2, 2021
    Date of Patent: April 16, 2024
    Assignee: Artificial, Inc.
    Inventors: Jeff Washington, Geoffrey J. Budd, Nikhita Singh, Jake Sganga, Alexander Li Honda
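
The entry above (patent 11958198) describes an interactive disambiguation flow: detect an ambiguous equipment term in an instruction, offer a predictive list of matching equipment, highlight candidates on hover, and rewrite the instruction on selection. Below is a minimal Python sketch of that flow under stated assumptions; the names (`Equipment`, `find_candidates`, `resolve`) and the keyword matching are illustrative, not the patent's actual implementation.

```python
# Minimal sketch of the ambiguity-resolution flow described in the abstract.
# All names and the matching logic are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Equipment:
    equipment_id: str
    kind: str          # e.g. "centrifuge"
    label: str         # human-readable name shown in the predictive list

LAB_EQUIPMENT = [
    Equipment("c1", "centrifuge", "Centrifuge (bench 1)"),
    Equipment("c2", "centrifuge", "Centrifuge (bench 3)"),
    Equipment("p1", "pipettor", "Pipettor (deck A)"),
]

def find_candidates(instruction: str):
    """Return (ambiguous_term, matching equipment) if a term in the
    instruction matches more than one piece of equipment."""
    for word in instruction.lower().split():
        matches = [e for e in LAB_EQUIPMENT if e.kind == word]
        if len(matches) > 1:
            return word, matches
    return None, []

def resolve(instruction: str, term: str, choice: Equipment) -> str:
    """Rewrite the instruction with the selected equipment, as the
    abstract describes for a selection in the predictive list."""
    return instruction.replace(term, choice.label, 1)

instruction = "spin the samples in the centrifuge for 5 minutes"
term, candidates = find_candidates(instruction)
for eq in candidates:
    print("candidate (would highlight on mouseover):", eq.label)
print(resolve(instruction, term, candidates[0]))
```

In a real interface the candidate list would drive the predictive text element and the hover highlighting in the lab's graphical representation; here both are reduced to prints.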
  • Patent number: 11919174
    Abstract: A lab system identifies a set of steps associated with a protocol for a lab meant to be performed by a robot within the lab using equipment and reagents. The lab system renders, within a user interface, a virtual representation of the lab, a virtual robot, and virtual equipment and reagents. Responsive to operating in a first mode, the lab system simulates the identified set of steps to identify virtual positions of the virtual robot within the lab as the virtual robot performs the steps and modifies the virtual representation of the lab to mirror the identified positions of the virtual robot in real time. Responsive to operating in a second mode, the lab system identifies positions of the robot within the lab as the robot performs the identified set of steps and modifies the virtual representation of the lab to mirror the identified positions of the robot in real time.
    Type: Grant
    Filed: August 2, 2021
    Date of Patent: March 5, 2024
    Assignee: Artificial, Inc.
    Inventors: Jeff Washington, Geoffrey J. Budd, Nikhita Singh, Jake Sganga, Alexander Li Honda
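
Patent 11919174 above describes two operating modes for the same virtual lab view: one driven by simulated step execution, one mirroring the physical robot's reported positions. A hedged sketch of that dual-mode loop follows; the stand-in kinematics and telemetry functions are invented for illustration.

```python
# Sketch of the two operating modes in the abstract: mode 1 simulates
# step execution to produce virtual robot positions; mode 2 mirrors
# positions reported by the physical robot. Names are illustrative.
from enum import Enum

class Mode(Enum):
    SIMULATE = 1   # drive the virtual robot from simulated steps
    MIRROR = 2     # drive the virtual robot from live robot telemetry

def simulate_step(step: str) -> tuple[float, float]:
    """Stand-in kinematics: derive a virtual (x, y) position per step."""
    return (len(step) * 0.5, len(step) * 0.2)

def read_robot_position() -> tuple[float, float]:
    """Stand-in for querying the physical robot's position sensors."""
    return (4.2, 1.7)

def update_view(position: tuple[float, float]) -> None:
    print(f"virtual robot moved to {position}")

def run(protocol: list[str], mode: Mode) -> None:
    for step in protocol:
        if mode is Mode.SIMULATE:
            update_view(simulate_step(step))
        else:
            # In the live mode the physical robot performs the step and
            # the view mirrors its reported position in real time.
            update_view(read_robot_position())

run(["aspirate 10 uL", "dispense into plate"], Mode.SIMULATE)
```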
  • Patent number: 11897144
    Abstract: A lab system accesses a first protocol for performance by a first robot in a first lab. The first protocol includes a set of steps, each associated with an operation, reagent, and equipment. For each of one or more steps, the lab system modifies the step by: (1) identifying one or more replacement operations that achieve an equivalent or substantially similar result as a performance of the operation, (2) identifying replacement equipment that operates substantially similarly to the equipment, and/or (3) identifying one or more replacement reagents that, when substituted for the reagent, do not substantially affect the performance of the step. The lab system generates a modified protocol by replacing one or more of the set of steps with the modified steps. The lab system selects a second lab including a second robot and configures the second robot to perform the modified protocol in the second lab.
    Type: Grant
    Filed: August 2, 2021
    Date of Patent: February 13, 2024
    Assignee: Artificial, Inc.
    Inventors: Jeff Washington, Geoffrey J. Budd, Nikhita Singh, Jake Sganga, Alexander Li Honda
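
Patent 11897144 above covers adapting a protocol for a second lab by substituting operations, equipment, and reagents. A minimal sketch of the substitution step follows; the replacement tables and step schema are hypothetical, and a real system would also substitute operations, which this sketch omits for brevity.

```python
# Illustrative sketch of the step-substitution idea: for each step, swap in
# replacement equipment and reagents available in a target lab. The tables
# and names are hypothetical; operations would be handled the same way.
REPLACEMENT_EQUIPMENT = {"thermocycler_A": "thermocycler_B"}
REPLACEMENT_REAGENTS = {"buffer_X": "buffer_Y"}

def adapt_step(step: dict, lab_inventory: set) -> dict:
    adapted = dict(step)
    if step["equipment"] not in lab_inventory:
        adapted["equipment"] = REPLACEMENT_EQUIPMENT.get(
            step["equipment"], step["equipment"])
    if step["reagent"] not in lab_inventory:
        adapted["reagent"] = REPLACEMENT_REAGENTS.get(
            step["reagent"], step["reagent"])
    return adapted

protocol = [{"operation": "amplify",
             "equipment": "thermocycler_A",
             "reagent": "buffer_X"}]
second_lab = {"thermocycler_B", "buffer_Y"}
modified = [adapt_step(step, second_lab) for step in protocol]
print(modified)  # steps now reference the second lab's equipment/reagents
```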
  • Publication number: 20220043561
    Abstract: A lab automation system receives an instruction from a user to perform a protocol within a lab via an interface including a graphical representation of the lab. The lab includes a robot and a set of equipment rendered within the graphical representation of the lab. The lab automation system identifies an ambiguous term of the instruction and pieces of equipment corresponding to the ambiguous term and modifies the interface to include a predictive text interface element listing the pieces of equipment. Upon a mouseover of a listed piece of equipment within the predictive text interface element, the lab automation system modifies the graphical representation of the lab to highlight the listed piece of equipment corresponding to the mouseover. Upon a selection of the listed piece of equipment within the predictive text interface element, the lab automation system modifies the instruction to include the listed piece of equipment.
    Type: Application
    Filed: August 2, 2021
    Publication date: February 10, 2022
    Inventors: Jeff Washington, Geoffrey J. Budd, Nikhita Singh, Jake Sganga, Alexander Li Honda
  • Publication number: 20220040853
    Abstract: A lab system configures robots to perform protocols in labs. The lab automation system receives, via a user interface, an instruction from a user to perform a protocol within a lab. The instruction may comprise text, and the lab may comprise a robot configured to perform the protocol. The lab system converts, using a machine learned model, the text into steps and, for each step, identifies one or more of an operation, lab equipment, and reagent associated with the step. In response to detecting an ambiguity/error associated with the step, the lab system notifies the user via the user interface of the ambiguity/error. The lab system may receive one or more indications from the user that resolve the ambiguity/error and update the associated steps. For each step, the lab system configures the robot to perform an identified operation, interact with identified lab equipment, and/or access/use an identified reagent.
    Type: Application
    Filed: August 2, 2021
    Publication date: February 10, 2022
    Inventors: Jeff Washington, Geoffrey J. Budd, Nikhita Singh, Jake Sganga, Alexander Li Honda
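
Publication 20220040853 above describes parsing free-text protocol instructions into structured steps and flagging ambiguities or errors for the user. The sketch below shows that data flow with simple keyword matching; the abstract specifies a machine-learned model, so the keyword lookup here is purely an illustrative stand-in, as are all the names.

```python
# Sketch of the parse-then-flag flow: split protocol text into steps,
# extract an operation/equipment per step, and flag anything unresolved
# for the user. A learned model would replace the keyword sets below.
KNOWN_OPERATIONS = {"mix", "incubate", "centrifuge"}
KNOWN_EQUIPMENT = {"shaker", "incubator"}

def parse_protocol(text: str):
    steps, issues = [], []
    sentences = filter(None, map(str.strip, text.split(".")))
    for i, sentence in enumerate(sentences):
        words = set(sentence.lower().split())
        step = {
            "text": sentence,
            "operation": next(iter(words & KNOWN_OPERATIONS), None),
            "equipment": next(iter(words & KNOWN_EQUIPMENT), None),
        }
        if step["operation"] is None:
            # Surface the ambiguity/error to the user, per the abstract.
            issues.append((i, "no recognized operation"))
        steps.append(step)
    return steps, issues

steps, issues = parse_protocol("Mix the sample on the shaker. Wait 10 minutes.")
print(steps)
print("flagged for the user:", issues)
```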
  • Publication number: 20220040862
    Abstract: A lab system identifies a set of steps associated with a protocol for a lab meant to be performed by a robot within the lab using equipment and reagents. The lab system renders, within a user interface, a virtual representation of the lab, a virtual robot, and virtual equipment and reagents. Responsive to operating in a first mode, the lab system simulates the identified set of steps to identify virtual positions of the virtual robot within the lab as the virtual robot performs the steps and modifies the virtual representation of the lab to mirror the identified positions of the virtual robot in real time. Responsive to operating in a second mode, the lab system identifies positions of the robot within the lab as the robot performs the identified set of steps and modifies the virtual representation of the lab to mirror the identified positions of the robot in real time.
    Type: Application
    Filed: August 2, 2021
    Publication date: February 10, 2022
    Inventors: Jeff Washington, Geoffrey J. Budd, Nikhita Singh, Jake Sganga, Alexander Li Honda
  • Publication number: 20220040856
    Abstract: A lab system accesses a first protocol for performance by a first robot in a first lab. The first protocol includes a set of steps, each associated with an operation, reagent, and equipment. For each of one or more steps, the lab system modifies the step by: (1) identifying one or more replacement operations that achieve an equivalent or substantially similar result as a performance of the operation, (2) identifying replacement equipment that operates substantially similarly to the equipment, and/or (3) identifying one or more replacement reagents that, when substituted for the reagent, do not substantially affect the performance of the step. The lab system generates a modified protocol by replacing one or more of the set of steps with the modified steps. The lab system selects a second lab including a second robot and configures the second robot to perform the modified protocol in the second lab.
    Type: Application
    Filed: August 2, 2021
    Publication date: February 10, 2022
    Inventors: Jeff Washington, Geoffrey J. Budd, Nikhita Singh, Jake Sganga, Alexander Li Honda
  • Publication number: 20220040863
    Abstract: A lab system calibrates robots and cameras within a lab. The lab system accesses, via a camera within a lab, an image of a robot arm, which comprises a visible tag located on an exterior. The lab system determines a position of the robot arm using position sensors located within the robot arm and determines a location of the camera relative to the robot arm based on the determined position and the location of the tag. The lab system calibrates the camera using the determined location of the camera relative to the robot arm. After calibrating the camera, the lab system accesses, via the camera, a second image of equipment in the lab that comprises a second visible tag on an exterior. The lab system determines, based on a location of the second visible tag within the accessed second image, a location of the equipment relative to the robot arm.
    Type: Application
    Filed: August 2, 2021
    Publication date: February 10, 2022
    Inventors: Jeff Washington, Geoffrey J. Budd, Nikhita Singh, Jake Sganga, Alexander Li Honda
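
Publication 20220040863 above describes calibrating a camera against a robot arm: the arm's pose comes from its position sensors, the tag's pose comes from the camera image, and composing the two gives the camera's pose relative to the robot, which then localizes other tagged equipment. A 2D toy version using homogeneous transforms follows; all poses and values are made up for illustration, and a real system would work in 3D with a proper tag detector.

```python
# 2D toy version of the calibration in the abstract, using homogeneous
# transforms. base_T_x denotes "pose of x in the robot base frame";
# cam_T_x denotes "pose of x in the camera frame". Values are invented.
import numpy as np

def transform(theta: float, tx: float, ty: float) -> np.ndarray:
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, tx], [s, c, ty], [0, 0, 1]])

# Pose of the tagged arm link in the base frame (from joint/position sensors).
base_T_tag = transform(0.3, 0.5, 0.2)
# Pose of the same tag in the camera frame (from detecting it in the image).
cam_T_tag = transform(-0.1, 0.1, 0.8)

# Calibration result: the camera's pose in the robot base frame.
base_T_cam = base_T_tag @ np.linalg.inv(cam_T_tag)

# Once calibrated, a second tag seen by the camera (e.g., on equipment)
# can be placed in the robot's frame: base_T_eq = base_T_cam @ cam_T_eq.
cam_T_equipment = transform(0.0, -0.4, 1.1)
print(np.round(base_T_cam @ cam_T_equipment, 3))
```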
  • Patent number: 10956784
    Abstract: An image creation and editing tool can use the data produced from training a neural network to add stylized representations of an object to an image. An object classification will correspond to an object representation, and pixel values for the object representation can be added to, or blended with, the pixel values of an image in order to add a visualization of a type of object to the image. Such an approach can be used to add stylized representations of objects to existing images or create new images based on those representations. The visualizations can be used to create patterns and textures as well, as may be used to paint or fill various regions of an image. Such patterns can enable regions to be filled where image data has been deleted, such as to remove an undesired object, in a way that appears natural for the contents of the image.
    Type: Grant
    Filed: December 17, 2018
    Date of Patent: March 23, 2021
    Assignee: A9.com, Inc.
    Inventors: Douglas Ryan Gray, Alexander Li Honda, Edward Hsiao
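
Patent 10956784 above describes blending pixel values for a learned object representation into an image to add a stylized visualization of that object. A minimal sketch of the alpha-blending step follows; the random patch stands in for data derived from a trained network, and all names are illustrative.

```python
# Minimal sketch of the blending idea: pixel values for an object
# representation are alpha-blended into a region of an existing image.
# Random noise stands in for a representation derived from a trained
# neural network; the function name and alpha value are assumptions.
import numpy as np

rng = np.random.default_rng(0)
image = rng.uniform(0, 255, (64, 64, 3))           # existing image
object_repr = rng.uniform(0, 255, (16, 16, 3))     # stylized object patch

def blend_patch(img, patch, top, left, alpha=0.6):
    """Blend patch into img at (top, left); alpha controls how strongly
    the object representation shows through the original pixels."""
    h, w, _ = patch.shape
    region = img[top:top + h, left:left + w]
    img[top:top + h, left:left + w] = alpha * patch + (1 - alpha) * region
    return img

# Tiling the patch across a region would build the pattern/texture fill the
# abstract mentions, e.g. over an area where image data was deleted.
result = blend_patch(image.copy(), object_repr, top=10, left=20)
print(result.shape)
```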
  • Publication number: 20190138851
    Abstract: An image creation and editing tool can use the data produced from training a neural network to add stylized representations of an object to an image. An object classification will correspond to an object representation, and pixel values for the object representation can be added to, or blended with, the pixel values of an image in order to add a visualization of a type of object to the image. Such an approach can be used to add stylized representations of objects to existing images or create new images based on those representations. The visualizations can be used to create patterns and textures as well, as may be used to paint or fill various regions of an image. Such patterns can enable regions to be filled where image data has been deleted, such as to remove an undesired object, in a way that appears natural for the contents of the image.
    Type: Application
    Filed: December 17, 2018
    Publication date: May 9, 2019
    Inventors: Douglas Ryan Gray, Alexander Li Honda, Edward Hsiao
  • Patent number: 10157332
    Abstract: An image creation and editing tool can use the data produced from training a neural network to add stylized representations of an object to an image. An object classification will correspond to an object representation, and pixel values for the object representation can be added to, or blended with, the pixel values of an image in order to add a visualization of a type of object to the image. Such an approach can be used to add stylized representations of objects to existing images or create new images based on those representations. The visualizations can be used to create patterns and textures as well, as may be used to paint or fill various regions of an image. Such patterns can enable regions to be filled where image data has been deleted, such as to remove an undesired object, in a way that appears natural for the contents of the image.
    Type: Grant
    Filed: June 6, 2016
    Date of Patent: December 18, 2018
    Assignee: A9.com, Inc.
    Inventors: Douglas Ryan Gray, Alexander Li Honda, Edward Hsiao
  • Patent number: 10019140
    Abstract: Approaches are described for managing a display of content on a computing device. Content (e.g., images, application data, etc.) is displayed on an interface of the device. An activation movement performed by a user (e.g., a double-tap) can cause the device to enable a content view control mode (such as a zoom control mode) that can be used to adjust a portion of the content being displayed on the interface. The activation movement can also be used to set an area of interest and display a graphical element indicating that the content view control mode is activated. In response to a motion being detected (e.g., a forward or backward tilt of the device), the device can adjust a portion of the content being displayed on the interface, such as displaying a "zoomed-in" portion or a "zoomed-out" portion of the image.
    Type: Grant
    Filed: June 26, 2014
    Date of Patent: July 10, 2018
    Assignee: Amazon Technologies, Inc.
    Inventors: Matthew Paul Bell, Peter Cheng, Stephen Michael Polansky, Amber Nalu, Alexander Li Honda, Yi Ding, David Wayne Stafford, Kenneth Mark Karakotsios
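
Patent 10019140 above describes a zoom control mode activated by a gesture (e.g., a double-tap) and driven by device tilt. A small event-handler sketch of that interaction follows; the class, thresholds, and sensor values are illustrative assumptions, not the patent's implementation.

```python
# Sketch of the content-view-control interaction: a double-tap activates
# the mode and sets the area of interest; forward/backward tilt adjusts
# the zoom level. All values and names are illustrative.
class ZoomController:
    def __init__(self):
        self.active = False
        self.zoom = 1.0
        self.anchor = None   # area of interest set on activation

    def on_double_tap(self, x: float, y: float) -> None:
        self.active = True
        self.anchor = (x, y)
        print(f"zoom mode on, anchored at {self.anchor}")

    def on_tilt(self, pitch_delta: float) -> None:
        """pitch_delta > 0 for a forward tilt (zoom in), < 0 for backward."""
        if not self.active:
            return
        self.zoom = max(1.0, self.zoom + 0.5 * pitch_delta)
        print(f"zoom level: {self.zoom:.2f}")

zc = ZoomController()
zc.on_double_tap(120, 340)
zc.on_tilt(0.4)   # forward tilt -> zoomed-in view
zc.on_tilt(-0.2)  # backward tilt -> zoomed-out view
```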
  • Publication number: 20160026261
    Abstract: An electronic device can be configured to enable a user to provide input via a tap of the device without the use of touch sensors (e.g., resistive, capacitive, ultrasonic or other acoustic, infrared or other optical, or piezoelectric touch technologies) and/or mechanical switches. Such a device can include other sensors, including inertial sensors (e.g., accelerometers, gyroscopes, or a combination thereof), microphones, proximity sensors, ambient light sensors, and/or cameras, among others, that can be used to capture respective sensor data. Feature values with respect to the respective sensor data can be extracted, and the feature values can be analyzed using machine learning to determine when the user has tapped on the electronic device. Detection of a single tap or multiple taps performed on the electronic device can be utilized to control the device.
    Type: Application
    Filed: July 24, 2014
    Publication date: January 28, 2016
    Inventors: Peter Cheng, Steven Scott Noble, Matthew Paul Bell, Yi Ding, Stephen Michael Polansky, Alexander Li Honda
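
Publication 20160026261 above describes extracting feature values from inertial sensor data and classifying them with machine learning to detect taps without touch sensors. The sketch below shows that pipeline on synthetic accelerometer windows; the features, the use of scikit-learn's logistic regression, and all thresholds are assumptions chosen for illustration, since the abstract does not name a specific model.

```python
# Sketch of the feature-extraction-plus-classifier pipeline: windowed
# accelerometer magnitudes are reduced to feature values, and a trained
# classifier decides whether a window contains a tap. The training data
# is synthetic and the model choice is an assumption.
import numpy as np
from sklearn.linear_model import LogisticRegression

def features(window: np.ndarray) -> np.ndarray:
    """Feature values extracted from a window of accelerometer readings."""
    return np.array([window.max(), window.std(), np.abs(np.diff(window)).max()])

rng = np.random.default_rng(1)
quiet = [rng.normal(0, 0.05, 50) for _ in range(100)]   # no-tap windows
taps = [rng.normal(0, 0.05, 50) for _ in range(100)]
for w in taps:
    w[25] += 2.0                                        # inject a tap spike

X = np.array([features(w) for w in quiet + taps])
y = np.array([0] * 100 + [1] * 100)
clf = LogisticRegression().fit(X, y)

test = rng.normal(0, 0.05, 50)
test[30] += 2.0
print("tap detected:", bool(clf.predict([features(test)])[0]))
```

Detecting single versus multiple taps, as the abstract mentions, would amount to counting classifier detections within a short time window.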
  • Patent number: 9235278
    Abstract: An electronic device can be configured to enable a user to provide input via a tap of the device without the use of touch sensors (e.g., resistive, capacitive, ultrasonic or other acoustic, infrared or other optical, or piezoelectric touch technologies) and/or mechanical switches. Such a device can include other sensors, including inertial sensors (e.g., accelerometers, gyroscopes, or a combination thereof), microphones, proximity sensors, ambient light sensors, and/or cameras, among others, that can be used to capture respective sensor data. Feature values with respect to the respective sensor data can be extracted, and the feature values can be analyzed using machine learning to determine when the user has tapped on the electronic device. Detection of a single tap or multiple taps performed on the electronic device can be utilized to control the device.
    Type: Grant
    Filed: July 24, 2014
    Date of Patent: January 12, 2016
    Assignee: Amazon Technologies, Inc.
    Inventors: Peter Cheng, Steven Scott Noble, Matthew Paul Bell, Yi Ding, Stephen Michael Polansky, Alexander Li Honda