Patents by Inventor Jonathan Su

Jonathan Su has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 10366439
    Abstract: Systems and methods for regional item recommendations are provided. In example embodiments, an indication of a destination geolocation from a user device of a user is received. Destination data corresponding to the destination geolocation is retrieved. A destination characteristic from the destination data is extracted. The destination characteristic indicates an affinity for apparel associated with the destination geolocation. A candidate apparel item is determined based on the extracted destination characteristic. An item listing corresponding to the candidate apparel item is identified. The item listing is presented on a user interface of the user device.
    Type: Grant
    Filed: December 22, 2014
    Date of Patent: July 30, 2019
    Assignee: eBay Inc.
    Inventors: Cheri Nola Leonard, Jiri Medlen, Jonathan Su, Mihir Naware, Jatin Chhugani, Neelakantan Sundaresan
  • Publication number: 20190057428
    Abstract: Techniques for mapping size information associated with a client to target brands, garments, sizes, shapes, and styles for which there is no standardized correlation. The size information associated with a client may be generated by modeling client garments, accessing computer aided drawing (CAD) files associated with client garments, or by analyzing a history of garment purchases associated with the client. Information for target garments may be generated in a similar fashion. A system may then create a standardized scale with a set of sizes for a target, and map a client base size to that standardized size scale. Similar matching and mapping may also be done with shape and style considerations. A recommendation based on the mapping may then be communicated to the client.
    Type: Application
    Filed: October 23, 2018
    Publication date: February 21, 2019
    Inventors: Jonathan Su, Mihir Naware, Jatin Chhugani, Neelakantan Sundaresan
  • Patent number: 10204375
    Abstract: Techniques for generating a digital wardrobe are presented herein. A transceiver can be configured to receive a request having a garment identifier and a user identifier. Additionally, an access module can be configured to access a first garment model, access a body model of the user corresponding to the user identifier, and access a second garment model corresponding to the user identifier. Furthermore, a processor can be configured by a garment simulation module to position the body model inside the first garment model and the second garment model, and calculate simulated forces based on the positioning. Moreover, a rendering module can be configured to generate an image of the garment models draped on the body model based on the calculated simulated forces. Subsequently, a display module can be configured to cause presentation of the generated image on a display of a device.
    Type: Grant
    Filed: December 1, 2014
    Date of Patent: February 12, 2019
    Assignee: eBay Inc.
    Inventors: Jonathan Su, Jatin Chhugani, Mihir Naware, Neelakantan Sundaresan
  • Publication number: 20190019056
    Abstract: In general, certain embodiments of the present disclosure provide methods and systems for object detection by a neural network comprising a convolution-nonlinearity step and a recurrent step. In a training mode, a dataset is passed into the neural network, and the neural network is trained to accurately output a box size and a center location of an object of interest. The box size corresponds to the smallest possible bounding box around the object of interest and the center location corresponds to the location of the center of the bounding box. In an inference mode, an image that is not part of the dataset is passed into the neural network. The neural network automatically identifies an object of interest and draws a box around the identified object of interest. The box drawn around the identified object of interest corresponds to the smallest possible bounding box around the object of interest.
    Type: Application
    Filed: September 10, 2018
    Publication date: January 17, 2019
    Applicant: Pilot AI Labs, Inc.
    Inventors: Brian Pierce, Elliot English, Ankit Kumar, Jonathan Su
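The abstract above describes a detector that outputs a box size and a center location for the smallest bounding box around an object. As a minimal illustration of that parameterization (not code from the patent), the (center, size) form converts to and from corner coordinates as follows; all names here are hypothetical:

```python
def center_size_to_corners(cx, cy, w, h):
    """Convert a (center, size) bounding box, as in the abstract,
    to (x_min, y_min, x_max, y_max) corner form."""
    return (cx - w / 2, cy - h / 2, cx + w / 2, cy + h / 2)

def corners_to_center_size(x_min, y_min, x_max, y_max):
    """Inverse conversion: corners back to center location and box size."""
    return ((x_min + x_max) / 2, (y_min + y_max) / 2,
            x_max - x_min, y_max - y_min)
```

The two representations carry the same information; detectors typically regress the (center, size) form and convert to corners for drawing.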
  • Publication number: 20180350140
    Abstract: Techniques for extraction of body parameters, dimensions and shape of a customer are presented herein. A model descriptive of a garment, a corresponding calibration factor and reference garment shapes can be accessed. A garment shape corresponding to the three-dimensional model can be selected from the reference garment shapes based on a comparison of the three-dimensional model with the reference garment shapes. A reference feature from the plurality of reference features may be associated with the model feature. A measurement of the reference feature may be calculated based on the association and the calibration factor. The computed measurement can be stored in a body profile associated with a user. An avatar can be generated for the user based on the body profile and be used to show or indicate fit of a garment, as well as make fit and size recommendations.
    Type: Application
    Filed: August 3, 2018
    Publication date: December 6, 2018
    Inventors: Jonathan Su, Mihir Naware, Jatin Chhugani
  • Patent number: 10078794
    Abstract: In general, certain embodiments of the present disclosure provide methods and systems for object detection by a neural network comprising a convolution-nonlinearity step and a recurrent step. In a training mode, a dataset is passed into the neural network, and the neural network is trained to accurately output a box size and a center location of an object of interest. The box size corresponds to the smallest possible bounding box around the object of interest and the center location corresponds to the location of the center of the bounding box. In an inference mode, an image that is not part of the dataset is passed into the neural network. The neural network automatically identifies an object of interest and draws a box around the identified object of interest. The box drawn around the identified object of interest corresponds to the smallest possible bounding box around the object of interest.
    Type: Grant
    Filed: November 30, 2016
    Date of Patent: September 18, 2018
    Assignee: Pilot AI Labs, Inc.
    Inventors: Brian Pierce, Elliot English, Ankit Kumar, Jonathan Su
  • Patent number: 10068371
    Abstract: Techniques for extraction of body parameters, dimensions and shape of a customer are presented herein. A model descriptive of a garment, a corresponding calibration factor and reference garment shapes can be accessed. A garment shape corresponding to the three-dimensional model can be selected from the reference garment shapes based on a comparison of the three-dimensional model with the reference garment shapes. A reference feature from the plurality of reference features may be associated with the model feature. A measurement of the reference feature may be calculated based on the association and the calibration factor. The computed measurement can be stored in a body profile associated with a user. An avatar can be generated for the user based on the body profile and be used to show or indicate fit of a garment, as well as make fit and size recommendations.
    Type: Grant
    Filed: August 29, 2014
    Date of Patent: September 4, 2018
    Assignee: eBay Inc.
    Inventors: Jonathan Su, Mihir Naware, Jatin Chhugani
  • Patent number: 9984409
    Abstract: Techniques for generating and presenting images of items within user selected context images are presented herein. In an example embodiment, an access module can be configured to receive a first environment image. A simulation module coupled to the access module may process the environment image to identify placement areas within the image, and an imaging module may merge an item image with the environment image and filter the merged image in an erosion area. In various embodiments, the items and environments may be selected by a user and presented to a user in real-time or near-real time as part of an online shopping experience. In further embodiments, the environments may be processed from images taken by a device of the user.
    Type: Grant
    Filed: December 22, 2014
    Date of Patent: May 29, 2018
    Assignee: eBay Inc.
    Inventors: Mihir Naware, Jatin Chhugani, Jonathan Su
  • Patent number: 9953460
    Abstract: Techniques for three-dimensional garment simulation using parallel computing are presented herein. An access module can be configured to access a three-dimensional garment model of a garment. The garment model can include garment points that represent a surface of the garment. A processor, having a plurality of cores, can be configured by a garment simulation module to calculate one or more exerted forces on a subset of garment points. Additionally, the garment simulation module can generate cross pairs and apportion the generated cross pairs among the plurality of cores. Moreover, the garment simulation module can determine, using the plurality of vector execution units in parallel based on an organized data layout, whether boundaries of the first subgroup of cross pairs are overlapping based on the one or more exerted forces. Subsequently, the garment simulation module can calculate one or more simulated forces acting on the garment points based on the determination.
    Type: Grant
    Filed: June 14, 2016
    Date of Patent: April 24, 2018
    Assignee: eBay Inc.
    Inventors: Jatin Chhugani, Jonathan Su, Mihir Naware
  • Publication number: 20170161591
    Abstract: According to various embodiments, a method for deep-learning based object tracking by a neural network is provided. The method comprises a training mode and an inference mode. In the training mode, the method includes: passing a dataset into the neural network, the dataset including a first image frame and a second image frame; and training the neural network to accurately output a similarity measure for the first and second image frames. In the inference mode, the method includes: passing a plurality of image frames into the neural network, wherein the plurality of image frames is not part of the dataset, the plurality of image frames comprising a first image frame and a second image frame, the first image frame including a first bounding box around an object and the second image frame including a second bounding box around an object; and automatically determining whether the object bounded by the first bounding box is the same object as the object bounded by the second bounding box.
    Type: Application
    Filed: December 2, 2016
    Publication date: June 8, 2017
    Applicant: Pilot AI Labs, Inc.
    Inventors: Elliot English, Ankit Kumar, Brian Pierce, Jonathan Su
  • Publication number: 20170161607
    Abstract: According to various embodiments, a method for gesture recognition using a neural network is provided. The method comprises a training mode and an inference mode. In the training mode, the method includes: passing a dataset into the neural network; and training the neural network to recognize a gesture of interest, wherein the neural network includes a convolution-nonlinearity step and a recurrent step. In the inference mode, the method includes: passing a series of images into the neural network, wherein the series of images is not part of the dataset; and recognizing the gesture of interest in the series of images.
    Type: Application
    Filed: December 5, 2016
    Publication date: June 8, 2017
    Inventors: Elliot English, Ankit Kumar, Brian Pierce, Jonathan Su
  • Publication number: 20170161911
    Abstract: According to various embodiments, a method for distance and velocity estimation of detected objects is provided. The method includes receiving an image that includes a minimal bounding box around an object of interest. The method also includes calculating a noisy estimate of the physical position of the object of interest relative to a source of the image. Last, the method includes producing a smooth estimate of the physical position of the object of interest using the noisy estimate.
    Type: Application
    Filed: December 5, 2016
    Publication date: June 8, 2017
    Applicant: Pilot AI Labs, Inc.
    Inventors: Ankit Kumar, Brian Pierce, Elliot English, Jonathan Su
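The abstract above turns a noisy per-frame position estimate into a smooth one. The publication does not specify the filter, but the idea can be sketched with a simple exponential smoother; the `alpha` factor below is a hypothetical tuning parameter:

```python
def smooth_positions(noisy, alpha=0.3):
    """Exponentially smooth a sequence of noisy 1-D position estimates.
    Each output blends the new measurement with the running estimate;
    smaller alpha trusts the measurement less and smooths more."""
    smoothed = []
    estimate = noisy[0]  # initialize from the first measurement
    for z in noisy:
        estimate = alpha * z + (1 - alpha) * estimate
        smoothed.append(estimate)
    return smoothed
```

A Kalman filter would serve the same role with a principled noise model; the exponential form is just the shortest illustration of noisy-to-smooth estimation.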
  • Publication number: 20170160751
    Abstract: According to various embodiments, a method for controlling drone movement for object tracking is provided. The method comprises: receiving a position and a velocity of a target; receiving sensor input from a drone; determining an angular velocity and a linear velocity for the drone; and controlling movement of the drone to track the target using the determined angular velocity and linear velocity.
    Type: Application
    Filed: December 5, 2016
    Publication date: June 8, 2017
    Applicant: Pilot AI Labs, Inc.
    Inventors: Brian Pierce, Elliot English, Ankit Kumar, Jonathan Su
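The abstract above maps a target's position to an angular and a linear velocity command for the drone. One common way to realize such a mapping is a proportional controller; the gains and function below are illustrative assumptions, not taken from the publication:

```python
import math

def track_target(drone_heading, drone_pos, target_pos,
                 k_ang=1.0, k_lin=0.5):
    """Toy proportional controller: turn toward the target and move
    forward at a speed proportional to the remaining distance."""
    dx = target_pos[0] - drone_pos[0]
    dy = target_pos[1] - drone_pos[1]
    bearing = math.atan2(dy, dx)
    # wrap the heading error into (-pi, pi] so the drone turns the short way
    err = (bearing - drone_heading + math.pi) % (2 * math.pi) - math.pi
    angular_velocity = k_ang * err
    linear_velocity = k_lin * math.hypot(dx, dy)
    return angular_velocity, linear_velocity
```

With the drone already facing the target, the angular command goes to zero and only the forward command remains.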
  • Publication number: 20170161555
    Abstract: According to various embodiments, a method for gesture recognition using a neural network is provided. The method comprises a training mode and an inference mode. In the training mode, the method includes: passing a dataset into the neural network; and training the neural network to recognize the fingers of a training user and a gesture of interest, wherein the neural network includes a convolution-nonlinearity step and a recurrent step. In the inference mode, the method includes: passing a series of images into the neural network, wherein the series of images is a virtual reality feed that includes the hands of a VR user; and recognizing the fingers of the VR user and gestures of interest from the series of images.
    Type: Application
    Filed: December 5, 2016
    Publication date: June 8, 2017
    Applicant: Pilot AI Labs, Inc.
    Inventors: Ankit Kumar, Brian Pierce, Elliot English, Jonathan Su
  • Publication number: 20170161592
    Abstract: According to various embodiments, a method for neural network dataset enhancement is provided. The method comprises taking a first picture using a fixed camera of just a set background, then taking a second picture with the fixed camera. The second picture is taken with the set background and an object of interest in the picture frame. The method further comprises extracting pixels of the image of the object of interest from the second picture, and superimposing the pixels of the image of the object of interest onto a plurality of different images.
    Type: Application
    Filed: December 5, 2016
    Publication date: June 8, 2017
    Applicant: Pilot AI Labs, Inc.
    Inventors: Jonathan Su, Ankit Kumar, Brian Pierce, Elliot English
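The abstract above extracts an object's pixels by comparing a fixed-camera shot of the bare background with a second shot containing the object, then pastes those pixels onto other images. A minimal sketch of that background-differencing idea follows; the grayscale representation and the threshold value are assumptions, as the publication specifies neither:

```python
def extract_foreground(background, scene, threshold=30):
    """Return (row, col) coordinates of pixels whose value differs from
    the fixed-camera background shot by more than `threshold`.
    Images are 2-D lists of grayscale ints."""
    coords = []
    for y, (bg_row, sc_row) in enumerate(zip(background, scene)):
        for x, (b, s) in enumerate(zip(bg_row, sc_row)):
            if abs(s - b) > threshold:
                coords.append((y, x))
    return coords

def superimpose(coords, scene, new_background):
    """Paste the extracted object pixels onto a different image,
    producing one synthetic training example."""
    out = [row[:] for row in new_background]
    for y, x in coords:
        out[y][x] = scene[y][x]
    return out
```

Repeating `superimpose` over many background images multiplies one captured object into many training examples, which is the dataset-enhancement step the abstract describes.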
  • Publication number: 20170154425
    Abstract: In general, certain embodiments of the present disclosure provide methods and systems for object detection by a neural network comprising a convolution-nonlinearity step and a recurrent step. In a training mode, a dataset is passed into the neural network, and the neural network is trained to accurately output a box size and a center location of an object of interest. The box size corresponds to the smallest possible bounding box around the object of interest and the center location corresponds to the location of the center of the bounding box. In an inference mode, an image that is not part of the dataset is passed into the neural network. The neural network automatically identifies an object of interest and draws a box around the identified object of interest. The box drawn around the identified object of interest corresponds to the smallest possible bounding box around the object of interest.
    Type: Application
    Filed: November 30, 2016
    Publication date: June 1, 2017
    Applicant: Pilot AI Labs, Inc.
    Inventors: Brian Pierce, Elliot English, Ankit Kumar, Jonathan Su
  • Publication number: 20160292915
    Abstract: Techniques for three-dimensional garment simulation using parallel computing are presented herein. An access module can be configured to access a three-dimensional garment model of a garment. The garment model can include garment points that represent a surface of the garment. A processor, having a plurality of cores, can be configured by a garment simulation module to calculate one or more exerted forces on a subset of garment points. Additionally, the garment simulation module can generate cross pairs and apportion the generated cross pairs among the plurality of cores. Moreover, the garment simulation module can determine, using the plurality of vector execution units in parallel based on an organized data layout, whether boundaries of the first subgroup of cross pairs are overlapping based on the one or more exerted forces. Subsequently, the garment simulation module can calculate one or more simulated forces acting on the garment points based on the determination.
    Type: Application
    Filed: June 14, 2016
    Publication date: October 6, 2016
    Inventors: Jatin Chhugani, Jonathan Su, Mihir Naware
  • Patent number: 9378593
    Abstract: Techniques for three-dimensional garment simulation using parallel computing are presented herein. An access module can be configured to access a three-dimensional garment model of a garment. The garment model can include garment points that represent a surface of the garment. A processor, having a plurality of cores, can be configured by a garment simulation module to calculate one or more exerted forces on a subset of garment points. Additionally, the garment simulation module can generate cross pairs and apportion the generated cross pairs among the plurality of cores. Moreover, the garment simulation module can determine, using the plurality of vector execution units in parallel based on an organized data layout, whether boundaries of the first subgroup of cross pairs are overlapping based on the one or more exerted forces. Subsequently, the garment simulation module can calculate one or more simulated forces acting on the garment points based on the determination.
    Type: Grant
    Filed: October 31, 2014
    Date of Patent: June 28, 2016
    Assignee: eBay Inc.
    Inventors: Jatin Chhugani, Jonathan Su, Mihir Naware
  • Publication number: 20160180449
    Abstract: Techniques for generating and presenting images of items within user selected context images are presented herein. In an example embodiment, an access module can be configured to receive a first environment model and a first wearable item model. A simulation module coupled to the access module may process the environment model to identify placement volumes within the environment model and to place a clothed body model within the placement volume to generate a context model. A rendering module may then generate a context image from the context model. In various embodiments, the environment model used for the context, the wearable item positioned within the environment model, and rendering values used to generate context images may be changed in response to user inputs to generate new context images that are displayed to a user.
    Type: Application
    Filed: December 23, 2014
    Publication date: June 23, 2016
    Inventors: Mihir Naware, Jatin Chhugani, Jonathan Su
  • Publication number: 20160180562
    Abstract: Techniques for generating and presenting images of items within user selected context images are presented herein. In an example embodiment, an access module can be configured to receive a first environment image. A simulation module coupled to the access module may process the environment image to identify placement areas within the image, and an imaging module may merge an item image with the environment image and filter the merged image in an erosion area. In various embodiments, the items and environments may be selected by a user and presented to a user in real-time or near-real time as part of an online shopping experience. In further embodiments, the environments may be processed from images taken by a device of the user.
    Type: Application
    Filed: December 22, 2014
    Publication date: June 23, 2016
    Inventors: Mihir Naware, Jatin Chhugani, Jonathan Su