Patents by Inventor Alexander Li Honda
Alexander Li Honda has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
Patent number: 11958198
Abstract: A lab automation system receives an instruction from a user to perform a protocol within a lab via an interface including a graphical representation of the lab. The lab includes a robot and a set of equipment rendered within the graphical representation of the lab. The lab automation system identifies an ambiguous term of the instruction and pieces of equipment corresponding to the ambiguous term, and modifies the interface to include a predictive text interface element listing the pieces of equipment. Upon a mouseover of a listed piece of equipment within the predictive text interface element, the lab automation system modifies the graphical representation of the lab to highlight the listed piece of equipment corresponding to the mouseover. Upon a selection of the listed piece of equipment within the predictive text interface element, the lab automation system modifies the instruction to include the listed piece of equipment.
Type: Grant
Filed: August 2, 2021
Date of Patent: April 16, 2024
Assignee: Artificial, Inc.
Inventors: Jeff Washington, Geoffrey J. Budd, Nikhita Singh, Jake Sganga, Alexander Li Honda
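The resolution flow described above can be sketched in a few lines: an ambiguous term in the instruction is matched against the lab's equipment registry, the candidates are listed for the user, and the chosen item is substituted back into the instruction. All names here (`LabAutomationSystem`, the equipment ids) are illustrative assumptions, not the patented implementation.

```python
# Hypothetical sketch: resolve an ambiguous equipment term in a protocol
# instruction by listing matching equipment and substituting the user's pick.

class LabAutomationSystem:
    def __init__(self, equipment):
        # equipment: mapping of equipment id -> human-readable label
        self.equipment = equipment

    def candidates_for(self, term):
        """Return ids of equipment whose label contains the ambiguous term."""
        term = term.lower()
        return [eid for eid, label in self.equipment.items()
                if term in label.lower()]

    def resolve(self, instruction, term, chosen_id):
        """Rewrite the instruction to name the selected piece of equipment."""
        return instruction.replace(term, self.equipment[chosen_id])

lab = LabAutomationSystem({
    "cf-1": "benchtop centrifuge 1",
    "cf-2": "refrigerated centrifuge 2",
})
candidates = lab.candidates_for("centrifuge")          # both centrifuges match
resolved = lab.resolve("spin sample in centrifuge", "centrifuge", "cf-2")
```

In the claimed interface, the candidate list backs the predictive text element and the mouseover highlight; the substitution step corresponds to the final instruction rewrite.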
Patent number: 11919174
Abstract: A lab system identifies a set of steps associated with a protocol for a lab, meant to be performed by a robot within the lab using equipment and reagents. The lab system renders, within a user interface, a virtual representation of the lab, a virtual robot, and virtual equipment and reagents. Responsive to operating in a first mode, the lab system simulates the identified set of steps to identify virtual positions of the virtual robot within the lab as the virtual robot performs the steps, and modifies the virtual representation of the lab to mirror the identified positions of the virtual robot in real-time. Responsive to operating in a second mode, the lab system identifies positions of the robot within the lab as the robot performs the identified set of steps, and modifies the virtual representation of the lab to mirror the identified positions of the robot in real-time.
Type: Grant
Filed: August 2, 2021
Date of Patent: March 5, 2024
Assignee: Artificial, Inc.
Inventors: Jeff Washington, Geoffrey J. Budd, Nikhita Singh, Jake Sganga, Alexander Li Honda
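The two operating modes reduce to one question: where do the virtual robot's positions come from? In the first mode they are derived by simulating the protocol steps; in the second they mirror the physical robot's sensed positions. The 2-D positions and function names below are assumptions chosen for illustration.

```python
# Illustrative sketch of the two modes: "simulate" derives positions by
# stepping the protocol; "live" mirrors positions reported by the robot.

def mirror_positions(mode, steps=None, robot_positions=None):
    """Yield the positions the virtual representation should display."""
    if mode == "simulate":
        # Derive each position by applying the step's movement delta.
        pos = (0, 0)
        for dx, dy in steps:
            pos = (pos[0] + dx, pos[1] + dy)
            yield pos
    elif mode == "live":
        # Pass through the physical robot's sensed positions as they arrive.
        yield from robot_positions

sim = list(mirror_positions("simulate", steps=[(1, 0), (0, 2)]))
live = list(mirror_positions("live", robot_positions=[(3, 3), (4, 5)]))
```

Either way, the renderer consumes the same position stream, which is what lets the virtual lab act as a preview in one mode and a digital twin in the other.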
Patent number: 11897144
Abstract: A lab system accesses a first protocol for performance by a first robot in a first lab. The first protocol includes a set of steps, each associated with an operation, reagent, and equipment. For each of one or more steps, the lab system modifies the step by: (1) identifying one or more replacement operations that achieve an equivalent or substantially similar result as a performance of the operation, (2) identifying replacement equipment that operates substantially similarly to the equipment, and/or (3) identifying one or more replacement reagents that, when substituted for the reagent, do not substantially affect the performance of the step. The lab system generates a modified protocol by replacing one or more of the set of steps with the modified steps. The lab system selects a second lab including a second robot and configures the second robot to perform the modified protocol in the second lab.
Type: Grant
Filed: August 2, 2021
Date of Patent: February 13, 2024
Assignee: Artificial, Inc.
Inventors: Jeff Washington, Geoffrey J. Budd, Nikhita Singh, Jake Sganga, Alexander Li Honda
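The three replacement rules can be sketched as a single substitution pass: each step carries an operation, reagent, and equipment, and the target lab supplies equivalence tables used to rewrite whatever it cannot perform as-is. The table contents and field names are hypothetical.

```python
# Hypothetical sketch: adapt a protocol for a second lab by swapping in
# equivalent operations, reagents, or equipment from replacement tables.

def adapt_protocol(steps, replacements):
    """Return a modified protocol with unavailable items swapped out."""
    adapted = []
    for step in steps:
        new_step = dict(step)
        for field in ("operation", "reagent", "equipment"):
            value = step[field]
            # Substitute only when the target lab defines an equivalent;
            # otherwise the original value passes through unchanged.
            new_step[field] = replacements.get(field, {}).get(value, value)
        adapted.append(new_step)
    return adapted

protocol = [{"operation": "vortex", "reagent": "buffer-A", "equipment": "mixer-X"}]
table = {"equipment": {"mixer-X": "mixer-Y"}}
adapted = adapt_protocol(protocol, table)
```

In the claimed system the equivalences are judged by result ("substantially similar"), not just by name; a lookup table is the simplest stand-in for that judgment.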
Publication number: 20220043561
Abstract: A lab automation system receives an instruction from a user to perform a protocol within a lab via an interface including a graphical representation of the lab. The lab includes a robot and a set of equipment rendered within the graphical representation of the lab. The lab automation system identifies an ambiguous term of the instruction and pieces of equipment corresponding to the ambiguous term, and modifies the interface to include a predictive text interface element listing the pieces of equipment. Upon a mouseover of a listed piece of equipment within the predictive text interface element, the lab automation system modifies the graphical representation of the lab to highlight the listed piece of equipment corresponding to the mouseover. Upon a selection of the listed piece of equipment within the predictive text interface element, the lab automation system modifies the instruction to include the listed piece of equipment.
Type: Application
Filed: August 2, 2021
Publication date: February 10, 2022
Inventors: Jeff Washington, Geoffrey J. Budd, Nikhita Singh, Jake Sganga, Alexander Li Honda
Publication number: 20220040853
Abstract: A lab system configures robots to perform protocols in labs. The lab automation system receives, via a user interface, an instruction from a user to perform a protocol within a lab. The instruction may comprise text, and the lab may comprise a robot configured to perform the protocol. The lab system converts, using a machine-learned model, the text into steps and, for each step, identifies one or more of an operation, lab equipment, and reagent associated with the step. In response to detecting an ambiguity/error associated with the step, the lab system notifies the user via the user interface of the ambiguity/error. The lab system may receive one or more indications from the user that resolve the ambiguity/error and update the associated steps. For each step, the lab system configures the robot to perform an identified operation, interact with identified lab equipment, and/or access/use an identified reagent.
Type: Application
Filed: August 2, 2021
Publication date: February 10, 2022
Inventors: Jeff Washington, Geoffrey J. Budd, Nikhita Singh, Jake Sganga, Alexander Li Honda
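The convert-then-flag loop above can be sketched with a toy parser: free text is split into steps, each step is matched against known operations, and unmatched steps are surfaced to the user as ambiguities. The keyword matcher below stands in for the machine-learned model the abstract refers to, and the operation vocabulary is an assumption.

```python
# Sketch of the parse-and-flag loop: text -> steps, with unrecognized
# steps collected as ambiguities for the user to resolve.

KNOWN_OPERATIONS = {"pipette", "incubate", "centrifuge"}

def parse_protocol(text):
    """Split instruction text into recognized steps and ambiguous lines."""
    steps, ambiguities = [], []
    for line in text.strip().splitlines():
        op = line.split()[0].lower()
        if op in KNOWN_OPERATIONS:
            steps.append({"operation": op, "raw": line})
        else:
            ambiguities.append(line)  # surfaced to the user for clarification
    return steps, ambiguities

steps, issues = parse_protocol("pipette 10uL into well A1\nshake gently")
```

Only after the ambiguity list is empty would the system go on to configure the robot for each step.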
Publication number: 20220040862
Abstract: A lab system identifies a set of steps associated with a protocol for a lab, meant to be performed by a robot within the lab using equipment and reagents. The lab system renders, within a user interface, a virtual representation of the lab, a virtual robot, and virtual equipment and reagents. Responsive to operating in a first mode, the lab system simulates the identified set of steps to identify virtual positions of the virtual robot within the lab as the virtual robot performs the steps, and modifies the virtual representation of the lab to mirror the identified positions of the virtual robot in real-time. Responsive to operating in a second mode, the lab system identifies positions of the robot within the lab as the robot performs the identified set of steps, and modifies the virtual representation of the lab to mirror the identified positions of the robot in real-time.
Type: Application
Filed: August 2, 2021
Publication date: February 10, 2022
Inventors: Jeff Washington, Geoffrey J. Budd, Nikhita Singh, Jake Sganga, Alexander Li Honda
Publication number: 20220040856
Abstract: A lab system accesses a first protocol for performance by a first robot in a first lab. The first protocol includes a set of steps, each associated with an operation, reagent, and equipment. For each of one or more steps, the lab system modifies the step by: (1) identifying one or more replacement operations that achieve an equivalent or substantially similar result as a performance of the operation, (2) identifying replacement equipment that operates substantially similarly to the equipment, and/or (3) identifying one or more replacement reagents that, when substituted for the reagent, do not substantially affect the performance of the step. The lab system generates a modified protocol by replacing one or more of the set of steps with the modified steps. The lab system selects a second lab including a second robot and configures the second robot to perform the modified protocol in the second lab.
Type: Application
Filed: August 2, 2021
Publication date: February 10, 2022
Inventors: Jeff Washington, Geoffrey J. Budd, Nikhita Singh, Jake Sganga, Alexander Li Honda
Publication number: 20220040863
Abstract: A lab system calibrates robots and cameras within a lab. The lab system accesses, via a camera within a lab, an image of a robot arm, which comprises a visible tag located on an exterior. The lab system determines a position of the robot arm using position sensors located within the robot arm, and determines a location of the camera relative to the robot arm based on the determined position and the location of the tag. The lab system calibrates the camera using the determined location of the camera relative to the robot arm. After calibrating the camera, the lab system accesses, via the camera, a second image of equipment in the lab that comprises a second visible tag on an exterior. The lab system determines, based on a location of the second visible tag within the accessed second image, a location of the equipment relative to the robot arm.
Type: Application
Filed: August 2, 2021
Publication date: February 10, 2022
Inventors: Jeff Washington, Geoffrey J. Budd, Nikhita Singh, Jake Sganga, Alexander Li Honda
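The calibration chain is: joint sensors give the arm's pose, the observed tag gives the camera-to-arm transform, and that transform then localizes any other tagged object in the arm's frame. A minimal 2-D sketch of that arithmetic follows; real systems solve the same problem in 3-D with fiducial markers and a pose solver (e.g. OpenCV's ArUco detection with `solvePnP`), so the vector subtraction here is illustrative only.

```python
# Simplified 2-D sketch: calibrate the camera from the arm's sensed pose
# and the observed tag, then locate tagged equipment in the arm's frame.

def calibrate_camera(arm_position, tag_offset_in_image):
    """Camera location = arm position minus the tag's observed offset."""
    return (arm_position[0] - tag_offset_in_image[0],
            arm_position[1] - tag_offset_in_image[1])

def locate_equipment(camera_position, equipment_tag_offset):
    """Equipment location in the arm's frame, via the calibrated camera."""
    return (camera_position[0] + equipment_tag_offset[0],
            camera_position[1] + equipment_tag_offset[1])

cam = calibrate_camera(arm_position=(5, 2), tag_offset_in_image=(1, 1))
eq = locate_equipment(cam, equipment_tag_offset=(3, 0))
```

The key property is that the arm never needs to touch the equipment: once the camera is registered to the arm, every tag the camera sees inherits a position in the arm's coordinate frame.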
Patent number: 10956784
Abstract: An image creation and editing tool can use the data produced from training a neural network to add stylized representations of an object to an image. An object classification will correspond to an object representation, and pixel values for the object representation can be added to, or blended with, the pixel values of an image in order to add a visualization of a type of object to the image. Such an approach can be used to add stylized representations of objects to existing images or create new images based on those representations. The visualizations can be used to create patterns and textures as well, as may be used to paint or fill various regions of an image. Such patterns can enable regions to be filled where image data has been deleted, such as to remove an undesired object, in a way that appears natural for the contents of the image.
Type: Grant
Filed: December 17, 2018
Date of Patent: March 23, 2021
Assignee: A9.com, Inc.
Inventors: Douglas Ryan Gray, Alexander Li Honda, Edward Hsiao
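The core operation the abstract describes, blending pixel values for an object representation into an image region, is ordinary alpha blending. A minimal sketch, using plain lists in place of image arrays; the function name and the single-row simplification are assumptions for illustration.

```python
# Minimal sketch of the blend step: object-representation pixels are
# alpha-blended into a row of image pixels; alpha=1 replaces outright.

def blend_region(image_row, object_row, alpha):
    """Blend object pixels into an image row, weighted by alpha."""
    return [round((1 - alpha) * img + alpha * obj)
            for img, obj in zip(image_row, object_row)]

blended = blend_region([100, 100, 100], [200, 0, 50], alpha=0.5)  # [150, 50, 75]
```

Repeating the same blend over a tiled object representation is what produces the patterns and textures the abstract mentions, including fills for regions where image data was deleted.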
Publication number: 20190138851
Abstract: An image creation and editing tool can use the data produced from training a neural network to add stylized representations of an object to an image. An object classification will correspond to an object representation, and pixel values for the object representation can be added to, or blended with, the pixel values of an image in order to add a visualization of a type of object to the image. Such an approach can be used to add stylized representations of objects to existing images or create new images based on those representations. The visualizations can be used to create patterns and textures as well, as may be used to paint or fill various regions of an image. Such patterns can enable regions to be filled where image data has been deleted, such as to remove an undesired object, in a way that appears natural for the contents of the image.
Type: Application
Filed: December 17, 2018
Publication date: May 9, 2019
Inventors: Douglas Ryan Gray, Alexander Li Honda, Edward Hsiao
Patent number: 10157332
Abstract: An image creation and editing tool can use the data produced from training a neural network to add stylized representations of an object to an image. An object classification will correspond to an object representation, and pixel values for the object representation can be added to, or blended with, the pixel values of an image in order to add a visualization of a type of object to the image. Such an approach can be used to add stylized representations of objects to existing images or create new images based on those representations. The visualizations can be used to create patterns and textures as well, as may be used to paint or fill various regions of an image. Such patterns can enable regions to be filled where image data has been deleted, such as to remove an undesired object, in a way that appears natural for the contents of the image.
Type: Grant
Filed: June 6, 2016
Date of Patent: December 18, 2018
Assignee: A9.com, Inc.
Inventors: Douglas Ryan Gray, Alexander Li Honda, Edward Hsiao
Patent number: 10019140
Abstract: Approaches are described for managing a display of content on a computing device. Content (e.g., images, application data, etc.) is displayed on an interface of the device. An activation movement performed by a user (e.g., a double-tap) can cause the device to enable a content view control mode (such as a zoom control mode) that can be used to adjust a portion of the content being displayed on the interface. The activation movement can also be used to set an area of interest and display a graphical element indicating that the content view control mode is activated. In response to a motion being detected (e.g., a forward or backward tilt of the device), the device can adjust a portion of the content being displayed on the interface, such as displaying a "zoomed-in" portion or a "zoomed-out" portion of the image.
Type: Grant
Filed: June 26, 2014
Date of Patent: July 10, 2018
Assignee: Amazon Technologies, Inc.
Inventors: Matthew Paul Bell, Peter Cheng, Stephen Michael Polansky, Amber Nalu, Alexander Li Honda, Yi Ding, David Wayne Stafford, Kenneth Mark Karakotsios
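Once the zoom control mode is active, the motion-to-zoom mapping is a small state update: forward tilt increases the zoom level around the area of interest, backward tilt decreases it, clamped to a valid range. The thresholds, step size, and bounds below are assumptions, not values from the patent.

```python
# Sketch of the tilt-to-zoom mapping used once the content view control
# mode has been activated (e.g. by a double-tap).

def adjust_zoom(zoom, tilt_degrees, step=0.1, lo=1.0, hi=4.0):
    """Return the new zoom level for a detected tilt."""
    if tilt_degrees > 5:        # forward tilt: zoom in
        zoom += step
    elif tilt_degrees < -5:     # backward tilt: zoom out
        zoom -= step
    return max(lo, min(hi, zoom))  # clamp to the valid zoom range

z = adjust_zoom(1.0, tilt_degrees=12)   # zooms in one step
```

The dead zone around zero tilt keeps ordinary hand tremor from changing the view, which is why a threshold rather than a raw proportional mapping is the natural choice here.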
Publication number: 20160026261
Abstract: An electronic device can be configured to enable a user to provide input via a tap of the device without the use of touch sensors (e.g., resistive, capacitive, ultrasonic or other acoustic, infrared or other optical, or piezoelectric touch technologies) and/or mechanical switches. Such a device can include other sensors, including inertial sensors (e.g., accelerometers, gyroscopes, or a combination thereof), microphones, proximity sensors, ambient light sensors, and/or cameras, among others, that can be used to capture respective sensor data. Feature values with respect to the respective sensor data can be extracted, and the feature values can be analyzed using machine learning to determine when the user has tapped on the electronic device. Detection of a single tap or multiple taps performed on the electronic device can be utilized to control the device.
Type: Application
Filed: July 24, 2014
Publication date: January 28, 2016
Inventors: Peter Cheng, Steven Scott Noble, Matthew Paul Bell, Yi Ding, Stephen Michael Polansky, Alexander Li Honda
Patent number: 9235278
Abstract: An electronic device can be configured to enable a user to provide input via a tap of the device without the use of touch sensors (e.g., resistive, capacitive, ultrasonic or other acoustic, infrared or other optical, or piezoelectric touch technologies) and/or mechanical switches. Such a device can include other sensors, including inertial sensors (e.g., accelerometers, gyroscopes, or a combination thereof), microphones, proximity sensors, ambient light sensors, and/or cameras, among others, that can be used to capture respective sensor data. Feature values with respect to the respective sensor data can be extracted, and the feature values can be analyzed using machine learning to determine when the user has tapped on the electronic device. Detection of a single tap or multiple taps performed on the electronic device can be utilized to control the device.
Type: Grant
Filed: July 24, 2014
Date of Patent: January 12, 2016
Assignee: Amazon Technologies, Inc.
Inventors: Peter Cheng, Steven Scott Noble, Matthew Paul Bell, Yi Ding, Stephen Michael Polansky, Alexander Li Honda
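The pipeline in this abstract has two stages: extract feature values from a window of sensor data, then classify the window as tap or no-tap. A toy sketch follows, using accelerometer magnitudes and a simple peak-based rule in place of the machine-learned classifier the abstract describes; the feature names and thresholds are assumptions.

```python
# Sketch of the tap-detection pipeline: feature extraction over a window
# of accelerometer magnitudes, then a stand-in threshold classifier.

def extract_features(samples):
    """Feature values for one window of accelerometer magnitudes."""
    peak = max(samples)
    mean = sum(samples) / len(samples)
    return {"peak": peak, "peak_to_mean": peak / mean}

def is_tap(samples, peak_threshold=2.5):
    """Classify a window: a sharp spike over a quiet baseline reads as a tap."""
    f = extract_features(samples)
    return f["peak"] > peak_threshold and f["peak_to_mean"] > 2.0

quiet = [1.0, 1.1, 0.9, 1.0]    # device at rest: no spike
tapped = [1.0, 1.0, 3.2, 1.1]   # brief spike from a tap on the casing
```

In the claimed approach the same feature values feed a learned model, which can also fuse microphone, proximity, and other sensor features, and distinguish single from multiple taps; the threshold rule here only illustrates the structure.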