Patents by Inventor Linjun Yang

Linjun Yang has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11938567
    Abstract: A laser fusion welding device includes a 1.9 µm laser light source, a control unit and a light spot adjusting device. The control unit is configured to control the laser light source and the light spot adjusting device to adjust a laser power density at an object to be subjected to fusion welding. The 1.9 µm laser light source has output power of 100-500 W. The control unit includes a time control unit, a power control unit and a light spot control unit. The time control unit is configured to control a turn-on time of the laser light source. The power control unit is configured to control the output power of the laser light source. The light spot control unit is configured to control the light spot adjusting device to adjust a size of a light spot at the object to be subjected to fusion welding.
    Type: Grant
    Filed: July 11, 2023
    Date of Patent: March 26, 2024
    Assignee: XINJIANG TECHNICAL INSTITUTE OF PHYSICS AND CHEMISTRY, CHINESE ACADEMY OF SCIENCES
    Inventors: Linjun Li, Shilie Pan, Xiaoming Duan, Yu Zhou, Yingjie Shen, Qianqian Hao, Yuqiang Yang, Xin He
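The control architecture in the abstract above (time, power, and spot-size control feeding a power-density adjustment) can be illustrated with a minimal sketch. It assumes a circular spot and uses hypothetical field names, none of which come from the patent:

```python
import math
from dataclasses import dataclass

@dataclass
class WeldSetpoint:
    """Hypothetical per-weld setpoint; field names are illustrative, not from the patent."""
    on_time_s: float          # turn-on time managed by the time control unit
    power_w: float            # output power, nominally within the 100-500 W range
    spot_diameter_mm: float   # spot size set via the light spot adjusting device

def power_density_w_per_mm2(sp: WeldSetpoint) -> float:
    """Power density at the workpiece = power / spot area (a circular spot is assumed)."""
    return sp.power_w / (math.pi * (sp.spot_diameter_mm / 2) ** 2)

# The control unit can trade power against spot size to reach a target density.
sp = WeldSetpoint(on_time_s=0.5, power_w=300.0, spot_diameter_mm=0.4)
print(f"{power_density_w_per_mm2(sp):.0f} W/mm^2 applied for {sp.on_time_s} s")
```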
  • Patent number: 11372914
    Abstract: The description relates to diversified hybrid image annotation for annotating images. One implementation includes generating first image annotations for a query image using a retrieval-based image annotation technique. Second image annotations can be generated for the query image using a model-based image annotation technique. The first and second image annotations can be integrated to generate a diversified hybrid image annotation result for the query image.
    Type: Grant
    Filed: March 26, 2018
    Date of Patent: June 28, 2022
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Yokesh Kumar, Kuang-Huei Lee, Houdong Hu, Li Huang, Arun Sacheti, Meenaz Merchant, Linjun Yang, Tianjun Xiao, Saurajit Mukherjee
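A minimal sketch of the integration step described in the abstract above, assuming both annotation sources return (tag, score) pairs and using a simple merge-by-maximum-score rule as a stand-in for the patented integration strategy:

```python
def integrate_annotations(retrieval_tags, model_tags, top_k=5):
    """Toy integration of two annotation sources: merge by highest score, dedupe, keep top_k.
    Both inputs are lists of (tag, score) pairs; this ordering rule is an assumption."""
    merged = {}
    for tag, score in retrieval_tags + model_tags:
        merged[tag] = max(score, merged.get(tag, 0.0))
    return sorted(merged.items(), key=lambda kv: kv[1], reverse=True)[:top_k]

# Retrieval-based tags tend to be diverse; model-based tags tend to be precise.
retrieval = [("dog", 0.9), ("grass", 0.6), ("park", 0.5)]
model = [("golden retriever", 0.8), ("dog", 0.7), ("outdoor", 0.4)]
print(integrate_annotations(retrieval, model))
```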
  • Patent number: 11074289
    Abstract: Systems and methods can be implemented to conduct searches based on images used as queries in a variety of applications. In various embodiments, a set of visual words representing a query image are generated from features extracted from the query image and are compared with visual words of index images. A set of candidate images is generated from the index images resulting from matching one or more visual words in the comparison. A multi-level ranking is conducted to sort the candidate images of the set of candidate images, and results of the multi-level ranking are returned to a user device that provided the query image. Additional systems and methods are disclosed.
    Type: Grant
    Filed: January 31, 2018
    Date of Patent: July 27, 2021
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Houdong Hu, Yan Wang, Linjun Yang, Li Huang, Xi Chen, Jiapei Huang, Ye Wu, Arun K. Sacheti, Meenaz Merchant
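The candidate generation and multi-level ranking flow in the abstract above can be sketched roughly as follows; the inverted index, the matched-word counting, and the stand-in fine_score function are all illustrative assumptions:

```python
from collections import defaultdict

# Hypothetical inverted index: visual word id -> image ids containing that word.
index = {1: {"imgA", "imgB"}, 2: {"imgB"}, 3: {"imgC"}}

def candidates_by_visual_words(query_words):
    """Level 1: any index image sharing at least one visual word with the query is a candidate."""
    hits = defaultdict(int)
    for w in query_words:
        for img in index.get(w, ()):  # unseen words simply contribute nothing
            hits[img] += 1
    return hits  # image id -> number of matched visual words

def multi_level_rank(query_words, fine_score):
    """Level 2: sort candidates by matched-word count, breaking ties with a finer score
    (fine_score stands in for a more expensive verification or re-ranking step)."""
    hits = candidates_by_visual_words(query_words)
    return sorted(hits, key=lambda img: (hits[img], fine_score(img)), reverse=True)

print(multi_level_rank({1, 2}, fine_score=lambda img: {"imgA": 0.2, "imgB": 0.9}.get(img, 0.0)))
```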
  • Patent number: 10664515
    Abstract: Systems, computing devices, and methods for performing an image search are presented. A search query including an image is received from a user. A segment associated with the image is identified. A user intent associated with the image and the segment is identified. Search results associated with the identified segment and user intent are generated, and presented to the user.
    Type: Grant
    Filed: December 18, 2015
    Date of Patent: May 26, 2020
    Assignee: MICROSOFT TECHNOLOGY LICENSING, LLC
    Inventors: Arun Sacheti, Ming Ye, Linjun Yang, Karim Hasham, Pavel Komlev
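A rough sketch of the segment-then-intent routing described in the abstract above, with a hypothetical label-to-segment mapping standing in for the patented identification steps:

```python
def identify_segment(image_labels):
    """Toy segment detector: maps coarse image labels to a segment.
    The label set and the segment names are illustrative assumptions."""
    if "dress" in image_labels or "shoe" in image_labels:
        return "fashion"
    if "sofa" in image_labels:
        return "home"
    return "general"

def search(image_labels):
    """Identify the segment, infer a user intent for it, and generate results per pair."""
    segment = identify_segment(image_labels)
    intent = "shop" if segment in ("fashion", "home") else "learn"
    return f"segment={segment}, intent={intent}"  # placeholder for the real result set

print(search({"dress", "person"}))
```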
  • Patent number: 10592769
    Abstract: Techniques describe submitting a video clip as a query by a user. A process retrieves images and information associated with the images in response to the query. The process decomposes the video clip into a sequence of frames to extract the features in a frame and to quantize the extracted features into descriptive words. The process further tracks the extracted features as points in the frame, a first set of points to correspond to a second set of points in consecutive frames to construct a sequence of points. Then the process identifies the points that satisfy criteria of being stable points and being centrally located in the frame to represent the video clip as a bag of descriptive words for searching for images and information related to the video clip.
    Type: Grant
    Filed: August 18, 2016
    Date of Patent: March 17, 2020
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Linjun Yang, Xian-Sheng Hua, Yang Cai
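The stability and centrality filtering described in the abstract above might look roughly like the sketch below; the track format, minimum track length, and central-region margin are illustrative assumptions, not values from the patent:

```python
def bag_of_words(tracks, frame_w, frame_h, min_len=5, center_margin=0.25):
    """Toy selection step: keep tracked points that persist for at least min_len frames
    (stability) and whose mean position lies in the central region of the frame, then
    collect their quantized descriptor ids as a bag of descriptive words.
    `tracks` is a list of (word_id, [(x, y), ...]) pairs; all thresholds are assumptions."""
    x_lo, x_hi = center_margin * frame_w, (1 - center_margin) * frame_w
    y_lo, y_hi = center_margin * frame_h, (1 - center_margin) * frame_h
    bag = []
    for word_id, points in tracks:
        if len(points) < min_len:
            continue
        cx = sum(p[0] for p in points) / len(points)
        cy = sum(p[1] for p in points) / len(points)
        if x_lo <= cx <= x_hi and y_lo <= cy <= y_hi:
            bag.append(word_id)
    return bag

tracks = [(17, [(320, 240)] * 8), (42, [(10, 10)] * 8), (99, [(300, 200)] * 2)]
print(bag_of_words(tracks, frame_w=640, frame_h=480))  # only word 17 is stable and central
```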
  • Publication number: 20200019628
    Abstract: Representative embodiments disclose mechanisms to perform visual intent classification or visual intent detection or both on an image. Visual intent classification utilizes a trained machine learning model that classifies subjects in the image according to a classification taxonomy. The visual intent classification can be used as a pre-triggering mechanism to initiate further action in order to substantially save processing time. Example further actions include user scenarios, query formulation, user experience enhancement, and so forth. Visual intent detection utilizes a trained machine learning model to identify subjects in an image, place a bounding box around the subject, and classify the subject according to the taxonomy. The trained machine learning model utilizes multiple feature detectors, multi-layer predictions, multilabel classifiers, and bounding box regression.
    Type: Application
    Filed: July 16, 2018
    Publication date: January 16, 2020
    Inventors: Xi Chen, Houdong Hu, Li Huang, Jiapei Huang, Arun Sacheti, Linjun Yang, Rui Xia, Kuang-Huei Lee, Meenaz Merchant, Sean Chang Culatana
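A minimal sketch of the pre-triggering idea above: run a cheap intent classifier first and only invoke the heavier detector (and any downstream actions) when its confidence clears a threshold. The stub models and the threshold value are assumptions:

```python
def classify_intent(image):
    """Stand-in for the trained classification model; returns (label, confidence)."""
    return ("product", 0.87)

def detect_subjects(image):
    """Stand-in for the heavier detection model: subjects with bounding boxes and scores."""
    return [{"label": "sneaker", "box": (40, 60, 180, 220), "score": 0.91}]

def handle_image(image, trigger_threshold=0.5):
    """Pre-triggering: only run detection (and downstream steps such as query formulation)
    when the cheap classifier is confident enough. The threshold is illustrative."""
    label, conf = classify_intent(image)
    if conf < trigger_threshold:
        return {"intent": label, "detections": []}
    return {"intent": label, "detections": detect_subjects(image)}

print(handle_image(image=None))
```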
  • Publication number: 20190294705
    Abstract: The description relates to diversified hybrid image annotation for annotating images. One implementation includes generating first image annotations for a query image using a retrieval-based image annotation technique. Second image annotations can be generated for the query image using a model-based image annotation technique. The first and second image annotations can be integrated to generate a diversified hybrid image annotation result for the query image.
    Type: Application
    Filed: March 26, 2018
    Publication date: September 26, 2019
    Applicant: Microsoft Technology Licensing, LLC
    Inventors: Yokesh KUMAR, Kuang-Huei LEE, Houdong HU, Li HUANG, Arun SACHETI, Meenaz MERCHANT, Linjun YANG, Tianjun XIAO, Saurajit MUKHERJEE
  • Publication number: 20190243910
    Abstract: Systems and methods can be implemented to conduct a visual search as a service in a variety of applications. In various embodiments, a system is configured to provide searching capabilities of content provided by a first entity in response to a search request by a second entity. An image provided by the second entity can be used by the system as a query image to search the content of the first entity. In an embodiment, the first entity can be a commercial entity providing such a system with image related content regarding its products and services such that any number of individual consumers can search for particular products and services of the commercial entity via their communication enabled devices. In addition, such systems can be arranged for other embodiments to provide customized searches of a single source by many individual devices. Additional systems and methods are disclosed.
    Type: Application
    Filed: February 5, 2018
    Publication date: August 8, 2019
    Inventors: Yan Wang, Houdong Hu, Li Huang, Arun K. Sacheti, Linjun Yang
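The multi-tenant arrangement described above, in which a first entity registers its content and second entities query it with images, might be sketched as follows; the class, its methods, and the toy similarity measure are hypothetical:

```python
class VisualSearchService:
    """Hypothetical visual-search-as-a-service sketch: each first entity registers an
    image index, and second entities query that index with an image feature."""

    def __init__(self):
        self._indexes = {}  # entity id -> {image id: feature vector}

    def register_content(self, entity_id, images):
        self._indexes[entity_id] = dict(images)

    def search(self, entity_id, query_feature, top_k=3):
        """Rank the registered entity's images by a toy similarity (negative L1 distance)."""
        index = self._indexes.get(entity_id, {})
        def sim(feat):
            return -sum(abs(a - b) for a, b in zip(query_feature, feat))
        return sorted(index, key=lambda img: sim(index[img]), reverse=True)[:top_k]

svc = VisualSearchService()
svc.register_content("retailer-1", [("chair-01", [0.1, 0.9]), ("lamp-07", [0.8, 0.2])])
print(svc.search("retailer-1", query_feature=[0.2, 0.8]))
```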
  • Publication number: 20190236167
    Abstract: Systems and methods can be implemented to conduct searches based on images used as queries in a variety of applications. In various embodiments, a set of visual words representing a query image are generated from features extracted from the query image and are compared with visual words of index images. A set of candidate images is generated from the index images resulting from matching one or more visual words in the comparison. A multi-level ranking is conducted to sort the candidate images of the set of candidate images, and results of the multi-level ranking are returned to a user device that provided the query image. Additional systems and methods are disclosed.
    Type: Application
    Filed: January 31, 2018
    Publication date: August 1, 2019
    Inventors: Houdong Hu, Yan Wang, Linjun Yang, Li Huang, Xi Chen, Jiapei Huang, Ye Wu, Arun K. Sacheti, Meenaz Merchant
  • Publication number: 20190236487
    Abstract: A technique for hyperparameter tuning can be performed via a hyperparameter tuning tool. In the technique, computer-readable values for each of one or more machine learning hyperparameters can be received. Multiple computer-readable hyperparameter value sets can be defined using different combinations of the values. In response to a request to start, an overall hyperparameter tuning operation can be performed via the tool, with the overall operation including a tuning job for each of the hyperparameter sets. A computer-readable comparison of the results of the parameter tuning operations can be generated for the hyperparameter sets, with the comparison indicating effectiveness of the hyperparameter sets, as compared to each other, in the tuning jobs.
    Type: Application
    Filed: January 30, 2018
    Publication date: August 1, 2019
    Applicant: Microsoft Technology Licensing, LLC
    Inventors: Jiapei Huang, Houdong Hu, Li Huang, Xi Chen, Linjun Yang
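A toy version of the tuning flow above: expand per-hyperparameter values into value sets, run one tuning job per set, and return a sorted comparison. The train_fn callback, the grid format, and the higher-is-better metric are assumptions:

```python
from itertools import product

def tune(train_fn, grid):
    """Expand each hyperparameter's candidate values into value sets (one per combination),
    run one job per set, and return results sorted so the sets can be compared."""
    names = sorted(grid)
    value_sets = [dict(zip(names, combo)) for combo in product(*(grid[n] for n in names))]
    results = [(params, train_fn(**params)) for params in value_sets]
    return sorted(results, key=lambda r: r[1], reverse=True)

# Hypothetical objective standing in for a real training-and-evaluation job.
def fake_job(learning_rate, batch_size):
    return 1.0 - abs(learning_rate - 0.01) * 10 - abs(batch_size - 64) / 256

for params, score in tune(fake_job, {"learning_rate": [0.1, 0.01], "batch_size": [32, 64]}):
    print(params, round(score, 3))
```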
  • Publication number: 20190066304
    Abstract: Systems and methods related to segmenting objects detected in an input view via a camera application in a live camera mode of an electronic device are disclosed herein. In some example aspects, a real-time object segmentation system is provided that receives input views during the live camera mode. The live camera mode may consist of at least one input view that is displayed on the screen of the electronic device prior to the capturing of a static image. The live camera mode may receive multiple views as the electronic device is moved, and these input views may be processed using at least one machine-learning algorithm to identify (or recognize) one or more objects. Based on the identification of the object or objects within the input view, at least one selectable action response may be provided to the user.
    Type: Application
    Filed: August 31, 2017
    Publication date: February 28, 2019
    Applicant: Microsoft Technology Licensing, LLC
    Inventors: Ryuichi HIRANO, Li HUANG, Eun Ji LEE, Mark-Gil Bongato PARAYNO, Linjun YANG, Meenaz Aliraza MERCHANT
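A rough sketch of the live-mode loop described above, with stub functions standing in for the machine-learning recognition step and for the mapping from recognized objects to selectable actions:

```python
def recognize_objects(frame):
    """Stand-in for the machine-learning step that identifies objects in a live input view."""
    return ["coffee mug"] if frame % 3 == 0 else []

def actions_for(obj):
    """Map a recognized object to selectable action responses (an illustrative mapping)."""
    return [f"search similar {obj}", f"shop for {obj}"]

def live_camera_loop(frames):
    """Toy live-camera mode: process each input view before any static image is captured,
    and surface actions as soon as an object is recognized."""
    for frame in frames:
        for obj in recognize_objects(frame):
            yield frame, obj, actions_for(obj)

for frame, obj, actions in live_camera_loop(range(6)):
    print(frame, obj, actions)
```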
  • Publication number: 20160358036
    Abstract: Techniques describe submitting a video clip as a query by a user. A process retrieves images and information associated with the images in response to the query. The process decomposes the video clip into a sequence of frames to extract the features in a frame and to quantize the extracted features into descriptive words. The process further tracks the extracted features as points in the frame, a first set of points to correspond to a second set of points in consecutive frames to construct a sequence of points. Then the process identifies the points that satisfy criteria of being stable points and being centrally located in the frame to represent the video clip as a bag of descriptive words for searching for images and information related to the video clip.
    Type: Application
    Filed: August 18, 2016
    Publication date: December 8, 2016
    Inventors: Linjun Yang, Xian-Sheng Hua, Yang Cai
  • Publication number: 20160350333
    Abstract: Systems, computing devices, and methods for performing an image search are presented. A search query including an image is received from a user. A segment associated with the image is identified. A user intent associated with the image and the segment is identified. Search results associated with the identified segment and user intent are generated, and presented to the user.
    Type: Application
    Filed: December 18, 2015
    Publication date: December 1, 2016
    Inventors: ARUN SACHETI, MING YE, LINJUN YANG, KARIM HASHAM, PAVEL KOMLEV
  • Patent number: 9443011
    Abstract: Techniques describe submitting a video clip as a query by a user. A process retrieves images and information associated with the images in response to the query. The process decomposes the video clip into a sequence of frames to extract the features in a frame and to quantize the extracted features into descriptive words. The process further tracks the extracted features as points in the frame, a first set of points to correspond to a second set of points in consecutive frames to construct a sequence of points. Then the process identifies the points that satisfy criteria of being stable points and being centrally located in the frame to represent the video clip as a bag of descriptive words for searching for images and information related to the video clip.
    Type: Grant
    Filed: May 18, 2011
    Date of Patent: September 13, 2016
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Linjun Yang, Xian-Sheng Hua, Yang Cai
  • Publication number: 20160247070
    Abstract: Technologies for a human computation framework suitable for answering common sense questions that are difficult for computers to answer but easy for humans to answer. The technologies support solving general common sense problems without a priori knowledge of the problems; support for determining whether an answer is from a bot or human so as to screen out spurious answers from bots; support for distilling answers collected from human users to ensure high quality solutions to the questions asked; and support for preventing malicious elements in or out of the system from attacking other system elements or contaminating the solutions produced by the system, and preventing users from being compensated without contributing answers.
    Type: Application
    Filed: May 3, 2016
    Publication date: August 25, 2016
    Inventors: Shipeng Li, Yang Yang, Bin Benjamin Zhu, Rui Guo, Linjun Yang
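The bot-screening and answer-distillation steps described in the abstract above might be sketched as below; the is_bot flag and the majority-agreement threshold are illustrative stand-ins for the framework's screening and quality checks:

```python
from collections import Counter

def distill_answers(responses, min_agreement=0.6):
    """Toy distillation: drop answers flagged as coming from bots, then accept the majority
    answer only if it reaches a minimum agreement ratio among the remaining human answers."""
    human = [r["answer"] for r in responses if not r.get("is_bot")]
    if not human:
        return None
    answer, count = Counter(human).most_common(1)[0]
    return answer if count / len(human) >= min_agreement else None

responses = [
    {"answer": "yes", "is_bot": False},
    {"answer": "yes", "is_bot": False},
    {"answer": "no", "is_bot": False},
    {"answer": "spam", "is_bot": True},
]
print(distill_answers(responses))  # "yes" (2 of the 3 human answers agree)
```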
  • Patent number: 9424516
    Abstract: Technologies for a human computation framework suitable for answering common sense questions that are difficult for computers to answer but easy for humans to answer. The technologies support solving general common sense problems without a priori knowledge of the problems; support for determining whether an answer is from a bot or human so as to screen out spurious answers from bots; support for distilling answers collected from human users to ensure high quality solutions to the questions asked; and support for preventing malicious elements in or out of the system from attacking other system elements or contaminating the solutions produced by the system, and preventing users from being compensated without contributing answers.
    Type: Grant
    Filed: November 1, 2012
    Date of Patent: August 23, 2016
    Assignee: MICROSOFT TECHNOLOGY LICENSING, LLC
    Inventors: Shipeng Li, Yang Yang, Bin Benjamin Zhu, Rui Guo, Linjun Yang
  • Patent number: 9317533
    Abstract: Adaptive image retrieval allows retrieval of images that are more likely to reflect a current trend of user preferences and/or interests, and therefore can provide relevant results to an image search. Adaptive image retrieval includes receiving image query log data from one or more clients, and updating a codebook of features based on the received query log data. The image query log data includes images that have been queried by the one or more clients within a predetermined period of time.
    Type: Grant
    Filed: November 2, 2010
    Date of Patent: April 19, 2016
    Assignee: Microsoft Technology Licensing, Inc.
    Inventors: Linjun Yang, Qi Tian, Bingbing Ni
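A toy illustration of the codebook update described above: re-cluster the features of recently queried images so the visual vocabulary follows current user interests. The tiny k-means, its parameters, and the feature format are assumptions, not the patented method:

```python
import random

def update_codebook(recent_query_features, k=4, iters=10, seed=0):
    """Refresh the codebook by re-clustering features from the image query log with a
    minimal k-means. A production system would use a proper clustering library."""
    random.seed(seed)
    centroids = random.sample(recent_query_features, k)
    for _ in range(iters):
        buckets = [[] for _ in range(k)]
        for f in recent_query_features:
            nearest = min(range(k),
                          key=lambda i: sum((a - b) ** 2 for a, b in zip(f, centroids[i])))
            buckets[nearest].append(f)
        for i, bucket in enumerate(buckets):
            if bucket:  # recompute each codeword as the mean of its assigned features
                centroids[i] = [sum(vals) / len(bucket) for vals in zip(*bucket)]
    return centroids

features = [[random.random(), random.random()] for _ in range(50)]
print(len(update_codebook(features)))  # 4 refreshed codewords
```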
  • Publication number: 20150356199
    Abstract: The description relates to click-through-based cross-view learning for internet searches. One implementation includes determining distances among textual queries and/or visual images in a click-through-based structured latent subspace. Given new content, results can be sorted based on the distances in the click-through-based structured latent subspace.
    Type: Application
    Filed: July 3, 2014
    Publication date: December 10, 2015
    Applicant: MICROSOFT CORPORATION
    Inventors: Tao MEI, Yong RUI, Linjun YANG, Ting YAO
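A minimal sketch of ranking by distance in a shared latent subspace, as described above; the projection matrices are assumed to be already learned from click-through data, and their shapes and the Euclidean distance are illustrative choices:

```python
import numpy as np

def rank_by_latent_distance(query_vec, image_vecs, W_text, W_image):
    """Project a textual query and candidate images into a shared latent subspace with
    (hypothetical, pre-learned) projection matrices, then sort images by distance."""
    q = query_vec @ W_text
    dists = {name: float(np.linalg.norm(q - vec @ W_image)) for name, vec in image_vecs.items()}
    return sorted(dists, key=dists.get)

rng = np.random.default_rng(0)
W_text, W_image = rng.normal(size=(5, 3)), rng.normal(size=(4, 3))  # placeholder projections
query = rng.normal(size=5)
images = {"img%d" % i: rng.normal(size=4) for i in range(3)}
print(rank_by_latent_distance(query, images, W_text, W_image))
```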
  • Publication number: 20150332124
    Abstract: A similarity of a first video to a second video may be identified automatically. Images are received from the videos, and divided into sub-images. The sub-images are evaluated based on a feature common to each of the sub-images. Binary representations of the images may be created based on the evaluation of the sub-images. A similarity of the first video to the second video may be determined based on a number of occurrences of a binary representation in the first video and the second video.
    Type: Application
    Filed: July 27, 2015
    Publication date: November 19, 2015
    Inventors: Linjun Yang, Lifeng Shang, Xian-Sheng Hua, Fei Wang
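The per-frame binary coding and occurrence-count comparison described above might look roughly like this sketch; the grid size, mean-intensity feature, and histogram-overlap similarity are assumptions:

```python
from collections import Counter

def binary_code(image, grid=2):
    """Toy per-frame code: split the frame into grid x grid sub-images, compute each
    sub-image's mean intensity, and set a bit when it exceeds the whole-frame mean.
    `image` is a 2D list of intensities; the feature and thresholding are assumptions."""
    h, w = len(image), len(image[0])
    overall = sum(map(sum, image)) / (h * w)
    bits = 0
    for gy in range(grid):
        for gx in range(grid):
            block = [image[y][x]
                     for y in range(gy * h // grid, (gy + 1) * h // grid)
                     for x in range(gx * w // grid, (gx + 1) * w // grid)]
            bits = (bits << 1) | (sum(block) / len(block) > overall)
    return bits

def video_similarity(frames_a, frames_b):
    """Compare two videos by how often each binary code occurs in both (histogram overlap)."""
    ca, cb = Counter(map(binary_code, frames_a)), Counter(map(binary_code, frames_b))
    overlap = sum(min(ca[c], cb[c]) for c in ca)
    return overlap / max(1, min(sum(ca.values()), sum(cb.values())))

frame = [[10, 10, 200, 200], [10, 10, 200, 200]]
print(binary_code(frame), video_similarity([frame, frame], [frame]))
```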
  • Patent number: 9092520
    Abstract: A similarity of a first video to a second video may be identified automatically. Images are received from the videos, and divided into sub-images. The sub-images are evaluated based on a feature common to each of the sub-images. Binary representations of the images may be created based on the evaluation of the sub-images. A similarity of the first video to the second video may be determined based on a number of occurrences of a binary representation in the first video and the second video.
    Type: Grant
    Filed: June 20, 2011
    Date of Patent: July 28, 2015
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Linjun Yang, Lifeng Shang, Xian-Sheng Hua, Fei Wang