Patents by Inventor Bin Ni

Bin Ni has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20210240453
    Abstract: Implementations are described herein for generating embeddings of source code using both the language and graph domains, and leveraging combinations of these semantically-rich and structurally-informative embeddings for various purposes. In various implementations, tokens of a source code snippet may be applied as input across a sequence-processing machine learning model to generate a plurality of token embeddings. A graph may also be generated based on the source code snippet. A joint representation may be generated based on the graph and the incorporated token embeddings. The joint representation generated from the source code snippet may be compared to one or more other joint representations generated from one or more other source code snippets to make a determination about the source code snippet.
    Type: Application
    Filed: February 4, 2020
    Publication date: August 5, 2021
    Inventors: Rohan Badlani, Owen Lewis, Georgios Evangelopoulos, Olivia Hatalsky, Bin Ni
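The joint-representation idea in the abstract above can be illustrated with a toy sketch. This is not the patented method: the hash-based "embeddings" stand in for a learned sequence model, and the degree-weighted pooling stands in for a real graph network. It only shows the shape of the pipeline: embed tokens, fold in graph structure, and compare the resulting joint vectors.

```python
# Toy sketch (illustrative, not the patented method): combine per-token
# embeddings with a simple graph over token positions into one joint
# vector, then compare two snippets by cosine similarity.
import hashlib
import math

def token_embedding(token, dim=8):
    """Deterministic pseudo-embedding standing in for a learned model."""
    digest = hashlib.sha256(token.encode()).digest()
    return [b / 255.0 for b in digest[:dim]]

def joint_representation(tokens, edges, dim=8):
    """Pool token embeddings, weighting each token by its graph degree."""
    degree = {i: 1 for i in range(len(tokens))}  # +1 smoothing
    for a, b in edges:
        degree[a] += 1
        degree[b] += 1
    total = sum(degree.values())
    vec = [0.0] * dim
    for i, tok in enumerate(tokens):
        emb = token_embedding(tok, dim)
        for d in range(dim):
            vec[d] += degree[i] * emb[d] / total
    return vec

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) *
                  math.sqrt(sum(b * b for b in v)))

# Two snippets, each with toy "data-flow" edges between token positions.
snippet_a = joint_representation(["x", "=", "x", "+", "1"], [(0, 2), (2, 4)])
snippet_b = joint_representation(["x", "=", "x", "+", "1"], [(0, 2), (2, 4)])
print(cosine(snippet_a, snippet_b))  # identical snippets score ~1.0
```

A real system would replace both stand-ins with trained models; the comparison step at the end is the part the abstract describes as making "a determination about the source code snippet."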
  • Patent number: 11064284
    Abstract: An in-ear device includes a housing shaped to hold the in-ear device in an ear of a user, and an audio package, disposed in the housing, to emit augmented sound. A first set of one or more microphones is positioned to receive external sound, and a controller is coupled to the audio package and the first set of one or more microphones. The controller includes a low-latency audio processing path, digital control parameters, and logic that when executed by the controller causes the in-ear device to perform operations. The operations may include receiving the external sound with the first set of one or more microphones to generate a low-latency sound signal; augmenting the low-latency sound signal by passing the low-latency sound signal through the low-latency audio processing path to produce an augmented sound signal; and outputting, with the audio package, the augmented sound based on the augmented sound signal.
    Type: Grant
    Filed: December 28, 2018
    Date of Patent: July 13, 2021
    Assignee: X Development LLC
    Inventors: Jason Rugolo, Bin Ni, Cyrus Behroozi
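The low-latency augmentation path in the abstract above amounts to transforming incoming samples with minimal buffering. A minimal sketch, assuming the simplest possible augmentation (a fixed gain with clipping; a real device would apply configurable filtering driven by its digital control parameters):

```python
# Toy sketch of a low-latency augmentation path: apply a per-sample gain
# and clip the result, as a stand-in for the device's real processing.
def augment(samples, gain=2.0, limit=1.0):
    """Scale each sample by `gain` and clip to [-limit, limit]."""
    return [max(-limit, min(limit, s * gain)) for s in samples]

print(augment([0.1, -0.3, 0.6]))  # → [0.2, -0.6, 1.0]
```

Because each output sample depends only on the corresponding input sample, the path adds no algorithmic buffering delay, which is the property "low-latency" refers to.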
  • Patent number: 11048482
    Abstract: Implementations are described herein for automatically identifying, recommending, and/or automatically effecting changes to a source code base based on updates previously made to other similar code bases. Intuitively, multiple prior “migrations,” or mass updates, of complex software system code bases may be analyzed to identify changes that were made. More particularly, a particular portion or “snippet” of source code—which may include a whole source code file, a source code function, a portion of source code, or any other semantically-meaningful code unit—may undergo a sequence of edits over time. Techniques described herein leverage this sequence of edits to predict a next edit of the source code snippet. These techniques have a wide variety of applications, including but not limited to automatically updating of source code, source code completion, recommending changes to source code, etc.
    Type: Grant
    Filed: July 26, 2019
    Date of Patent: June 29, 2021
    Assignee: X Development LLC
    Inventors: Georgios Evangelopoulos, Benoit Schillings, Bin Ni
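The "sequence of edits" idea above can be sketched with a frequency table over previously observed rewrites. This is a deliberately naive stand-in for the patented learned models, and the `urllib`/`requests` examples are hypothetical migration data, not from the patent:

```python
# Toy sketch (illustrative only): mine prior edit histories and suggest
# the most frequently observed rewrite of an exact snippet.
from collections import Counter

def mine_edits(histories):
    """histories: list of snippet version lists, e.g. [v1, v2, v3]."""
    rewrites = Counter()
    for versions in histories:
        for old, new in zip(versions, versions[1:]):
            rewrites[(old, new)] += 1
    return rewrites

def suggest_next(snippet, rewrites):
    """Return the most frequently observed rewrite of this snippet."""
    candidates = [(count, new) for (old, new), count in rewrites.items()
                  if old == snippet]
    if not candidates:
        return None
    return max(candidates)[1]

prior = [
    ["urllib.urlopen(u)", "urllib2.urlopen(u)", "requests.get(u)"],
    ["urllib.urlopen(u)", "requests.get(u)"],
]
table = mine_edits(prior)
print(suggest_next("urllib.urlopen(u)", table))
```

A production system would generalize beyond exact-match snippets (for example by embedding them, as the related publication 20200371778 below describes), but the lookup-and-suggest flow is the same.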
  • Patent number: 10911673
    Abstract: In an embodiment, an electronic device may be configured to capture still frames during video capture but may capture the still frames in the 4×3 aspect ratio and at higher resolution than the 16×9 aspect ratio video frames. The device may interleave high resolution, 4×3 frames and lower resolution 16×9 frames in the video sequence, and may capture the nearest higher resolution, 4×3 frame when the user indicates the capture of a still frame. Alternatively, the device may display 16×9 frames in the video sequence, and then expand to 4×3 frames when a shutter button is pressed. The device may capture the still frame and return to the 16×9 video frames responsive to a release of the shutter button.
    Type: Grant
    Filed: November 4, 2019
    Date of Patent: February 2, 2021
    Assignee: Apple Inc.
    Inventors: D. Amnon Silverstein, Shun Wai Go, Suk Hwan Lim, Timothy J. Millet, Ting Chen, Bin Ni
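The aspect-ratio relationship in the abstract above is simple arithmetic: a 16×9 video frame is a centered horizontal band cut from the full 4×3 sensor frame. A small sketch, using an assumed 4032×3024 sensor resolution for illustration:

```python
# Illustrates the 4:3 vs 16:9 geometry described above: the largest
# centered 16:9 crop of a 4:3 frame keeps full width and trims rows.
def crop_16_9(width, height):
    """Return (crop_width, crop_height, top_offset) for a 4:3 frame."""
    crop_h = width * 9 // 16
    offset = (height - crop_h) // 2
    return width, crop_h, offset

w, h = 4032, 3024            # an assumed 4:3 sensor resolution
print(crop_16_9(w, h))       # → (4032, 2268, 378)
```

So the device can record 4032×2268 video frames while the full 4032×3024 still remains available, which is why the nearest full-sensor frame can serve as the captured still.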
  • Publication number: 20210026605
    Abstract: Implementations are described herein for automatically identifying, recommending, and/or automatically effecting changes to a source code base based on updates previously made to other similar code bases. Intuitively, multiple prior “migrations,” or mass updates, of complex software system code bases may be analyzed to identify changes that were made. More particularly, a particular portion or “snippet” of source code—which may include a whole source code file, a source code function, a portion of source code, or any other semantically-meaningful code unit—may undergo a sequence of edits over time. Techniques described herein leverage this sequence of edits to predict a next edit of the source code snippet. These techniques have a wide variety of applications, including but not limited to automatically updating of source code, source code completion, recommending changes to source code, etc.
    Type: Application
    Filed: July 26, 2019
    Publication date: January 28, 2021
    Inventors: Georgios Evangelopoulos, Benoit Schillings, Bin Ni
  • Publication number: 20210011694
    Abstract: Techniques are described herein for translating source code in one programming language to source code in another programming language using machine learning. In various implementations, one or more components of one or more generative adversarial networks, such as a generator machine learning model, may be trained to generate “synthetically-naturalistic” source code that can be used as a translation of source code in an unfamiliar language. In some implementations, a discriminator machine learning model may be employed to aid in training the generator machine learning model, e.g., by being trained to discriminate between human-generated (“genuine”) and machine-generated (“synthetic”) source code.
    Type: Application
    Filed: July 9, 2019
    Publication date: January 14, 2021
    Inventors: Bin Ni, Zhiqiang Yuan, Qianyu Zhang
  • Publication number: 20210004210
    Abstract: Techniques are described herein for using artificial intelligence to “learn,” statistically, a target programming style that is imposed in and/or evidenced by a code base. Once the target programming style is learned, it can be used for various purposes. In various implementations, one or more generative adversarial networks (“GANs”), each including a generator machine learning model and a discriminator machine learning model, may be trained to facilitate learning and application of target programming style(s). In some implementations, the discriminator(s) and/or generator(s) may operate on graphical input, and may take the form of graph neural networks (“GNNs”), graph attention neural networks (“GANNs”), graph convolutional networks (“GCNs”), etc., although this is not required.
    Type: Application
    Filed: July 1, 2019
    Publication date: January 7, 2021
    Inventors: Georgios Evangelopoulos, Olivia Hatalsky, Bin Ni, Qianyu Zhang
  • Patent number: 10885902
    Abstract: Techniques are described for using stenography to protect sensitive information within conversational audio data by generating a pseudo-language representation of conversational audio data. In some implementations, audio data corresponding to an utterance is received. The audio data is classified as likely sensitive audio data. A particular set of sentiments associated with the audio data is determined. Data indicating the particular set of sentiments associated with the audio data is provided to a model. The model is trained to output, for each of different sets of sentiments, desensitized, pseudo-language audio data that exhibits the set of sentiments, and is not classified as likely sensitive audio data. A particular desensitized, pseudo-language audio data is received from the model. The audio data is replaced with the particular desensitized, pseudo-language audio data and stored within an audio data repository.
    Type: Grant
    Filed: November 21, 2018
    Date of Patent: January 5, 2021
    Assignee: X Development LLC
    Inventors: Antonio Raymond Papania-Davis, Bin Ni, Shelby Lin
  • Patent number: 10861228
    Abstract: A system to optically measure an ear includes a controller with logic that when executed by the controller causes the system to perform operations. Operations may include capturing the one or more images of the ear using the one or more image sensors, and generating image data from the one or more images. 3D keypoints of the ear are calculated from the image data, and a 3D model of the ear is generated using the 3D keypoints.
    Type: Grant
    Filed: December 28, 2018
    Date of Patent: December 8, 2020
    Assignee: X Development LLC
    Inventors: Jason Rugolo, Bin Ni, Daniel George
  • Publication number: 20200371778
    Abstract: Implementations are described herein for automatically identifying, recommending, and/or effecting changes to a legacy source code base by leveraging knowledge gained from prior updates made to other similar legacy code bases. In some implementations, data associated with a first version source code snippet may be applied as input across a machine learning model to generate a new source code embedding in a latent space. Reference embedding(s) may be identified in the latent space based on their distance(s) from the new source code embedding in the latent space. The reference embedding(s) may be associated with individual changes made during the prior code base update(s). Based on the identified one or more reference embeddings, change(s) to be made to the first version source code snippet to create a second version source code snippet may be identified, recommended, and/or effected.
    Type: Application
    Filed: May 21, 2019
    Publication date: November 26, 2020
    Inventors: Bin Ni, Benoit Schillings, Georgios Evangelopoulos, Olivia Hatalsky, Qianyu Zhang, Grigory Bronevetsky
  • Patent number: 10848838
    Abstract: One or more computing devices, systems, and/or methods for generating and/or presenting time-lapse videos and/or live-stream videos are provided. For example, a plurality of video frames may be extracted from a video. A first set of video frames and a second set of video frames may be identified from the plurality of video frames. The first set of video frames may be combined to generate a first time-lapse video frame and the second set of video frames may be combined to generate a second time-lapse video frame. A time-lapse video may be generated based upon the first time-lapse video frame and the second time-lapse video frame. In another example, a time-lapse video may be generated based upon a recorded video associated with a live-stream video. The time-lapse video may be presented. Responsive to a completion of the presenting the time-lapse video, the live-stream video may be presented.
    Type: Grant
    Filed: June 17, 2019
    Date of Patent: November 24, 2020
    Assignee: Oath Inc.
    Inventors: Bin Ni, Benoit Schillings, Stephen Lee Hodnicki, Michael Chang-Chen
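The frame-combination step described above can be sketched directly: partition the extracted frames into sets and combine each set into one time-lapse frame. This minimal version combines by per-pixel averaging, which is one plausible choice, not necessarily the claimed one:

```python
# Minimal sketch: group video frames into fixed-size sets and average
# each set per pixel to produce one time-lapse frame per set.
def combine(frames):
    """Average a set of frames (each a flat list of pixel values)."""
    n = len(frames)
    return [sum(px) / n for px in zip(*frames)]

def time_lapse(frames, group_size):
    return [combine(frames[i:i + group_size])
            for i in range(0, len(frames), group_size)]

# Four 3-pixel "frames" collapsed into two time-lapse frames.
video = [[0, 0, 0], [2, 2, 2], [4, 4, 4], [6, 6, 6]]
print(time_lapse(video, 2))  # → [[1.0, 1.0, 1.0], [5.0, 5.0, 5.0]]
```

With a group size of 2, playback runs twice as fast while motion within each group is blended rather than dropped.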
  • Publication number: 20200275133
    Abstract: Disclosed are systems and methods for improving interactions with and between computers in content generating, searching, hosting and/or providing systems supported by or configured with personal computing devices, servers and/or platforms. The systems interact to identify and retrieve data within or across platforms, which can be used to improve the quality of data used in processing interactions between or among processors in such systems. The disclosed systems and methods automatically analyze a live streaming media file, and identify portions of the media that are highlights. The content classified as a highlight can be shared across social media platforms, and indexed for searching respective to attributes of the video content. The streaming and highlight media content is renderable in a novel, modified video player that enables variable playback speeds for how content is classified, and enables on-demand selections of specific content portions and adjustable rendering displays during streaming.
    Type: Application
    Filed: May 13, 2020
    Publication date: August 27, 2020
    Inventors: Bin Ni, Kirk Lieb, Rick Hawes, Yale Song, Benoit Schillings, Vahe Oughourlian, Jordi Vallmitjana, Jennelle Nystrom, Hardik Ruparel, Michael Chen, Adam Mathes, Arunkumar Balasubramanian, Jian Zhou, Matt Edelman
  • Publication number: 20200211277
    Abstract: A system to optically measure an ear includes a controller with logic that when executed by the controller causes the system to perform operations. Operations may include capturing the one or more images of the ear using the one or more image sensors, and generating image data from the one or more images. 3D keypoints of the ear are calculated from the image data, and a 3D model of the ear is generated using the 3D keypoints.
    Type: Application
    Filed: December 28, 2018
    Publication date: July 2, 2020
    Inventors: Jason Rugolo, Bin Ni, Daniel George
  • Publication number: 20200213711
    Abstract: An in-ear device includes a housing shaped to hold the in-ear device in an ear of a user, and an audio package, disposed in the housing, to emit augmented sound. A first set of one or more microphones is positioned to receive external sound, and a controller is coupled to the audio package and the first set of one or more microphones. The controller includes a low-latency audio processing path, digital control parameters, and logic that when executed by the controller causes the in-ear device to perform operations. The operations may include receiving the external sound with the first set of one or more microphones to generate a low-latency sound signal; augmenting the low-latency sound signal by passing the low-latency sound signal through the low-latency audio processing path to produce an augmented sound signal; and outputting, with the audio package, the augmented sound based on the augmented sound signal.
    Type: Application
    Filed: December 28, 2018
    Publication date: July 2, 2020
    Inventors: Jason Rugolo, Bin Ni, Cyrus Behroozi
  • Patent number: 10681391
    Abstract: Disclosed are systems and methods for improving interactions with and between computers in content generating, searching, hosting and/or providing systems supported by or configured with personal computing devices, servers and/or platforms. The systems interact to identify and retrieve data within or across platforms, which can be used to improve the quality of data used in processing interactions between or among processors in such systems. The disclosed systems and methods automatically analyze a live streaming media file, and identify portions of the media that are highlights. The content classified as a highlight can be shared across social media platforms, and indexed for searching respective to attributes of the video content. The streaming and highlight media content is renderable in a novel, modified video player that enables variable playback speeds for how content is classified, and enables on-demand selections of specific content portions and adjustable rendering displays during streaming.
    Type: Grant
    Filed: July 13, 2016
    Date of Patent: June 9, 2020
    Assignee: Oath Inc.

    Inventors: Bin Ni, Kirk Lieb, Rick Hawes, Yale Song, Benoit Schillings, Vahe Oughourlian, Jordi Vallmitjana, Jennelle Nystrom, Hardik Ruparel, Michael Chen, Adam Mathes, Arunkumar Balasubramanian, Jian Zhou, Matt Edelman
  • Patent number: 10678992
    Abstract: Generating notifications comprising text and image data for client devices with limited display screens is disclosed. An image to be included in the notification is resized and reshaped using image processing techniques. The resized image is further analyzed to identify optimal portions for placing the text data. The text data can also be analyzed and shortened for including at the identified portion of resized image to generate a notification. The resulting notification displays the text and image data optimally within the limited screen space of the client device so that a user observing the notification can obtain the information at a glance.
    Type: Grant
    Filed: May 21, 2019
    Date of Patent: June 9, 2020
    Assignee: Oath Inc.
    Inventors: Bin Ni, Jia Li
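One concrete piece of the abstract above, shortening notification text for a limited display, can be sketched in a few lines. The word-boundary truncation rule here is an assumption for illustration; the image resizing and placement analysis would require real image processing:

```python
# Hedged sketch of the text-shortening step described above: truncate at
# a word boundary and append an ellipsis so the result fits the screen.
def shorten(text, max_chars):
    """Return text unchanged if it fits, else a word-bounded prefix + '…'."""
    if len(text) <= max_chars:
        return text
    cut = text[:max_chars - 1].rsplit(" ", 1)[0]
    return cut + "…"

print(shorten("Breaking: markets rally on earnings surprise", 20))
# → Breaking: markets…
```

Cutting at a word boundary keeps the glanceable quality the abstract emphasizes, since a mid-word cut is harder to read at small sizes.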
  • Publication number: 20200068129
    Abstract: In an embodiment, an electronic device may be configured to capture still frames during video capture but may capture the still frames in the 4×3 aspect ratio and at higher resolution than the 16×9 aspect ratio video frames. The device may interleave high resolution, 4×3 frames and lower resolution 16×9 frames in the video sequence, and may capture the nearest higher resolution, 4×3 frame when the user indicates the capture of a still frame. Alternatively, the device may display 16×9 frames in the video sequence, and then expand to 4×3 frames when a shutter button is pressed. The device may capture the still frame and return to the 16×9 video frames responsive to a release of the shutter button.
    Type: Application
    Filed: November 4, 2019
    Publication date: February 27, 2020
    Inventors: D. Amnon Silverstein, Shun Wai Go, Suk Hwan Lim, Timothy J. Millet, Ting Chen, Bin Ni
  • Patent number: 10573048
    Abstract: One or more computing devices, systems, and/or methods for emotional reaction sharing are provided. For example, a client device captures video of a user viewing content, such as a live stream video. Landmark points, corresponding to facial features of the user, are identified and provided to a user reaction distribution service that evaluates the landmark points to identify a facial expression of the user, such as a crying facial expression. The facial expression, such as landmark points that can be applied to a three-dimensional model of an avatar to recreate the facial expression, are provided to client devices of users viewing the content, such as a second client device. The second client device applies the landmark points of the facial expression to a bone structure mapping and a muscle movement mapping to create an expressive avatar having the facial expression for display to a second user.
    Type: Grant
    Filed: July 25, 2016
    Date of Patent: February 25, 2020
    Assignee: Oath Inc.
    Inventors: Bin Ni, Gregory Davis Choi, Adam Bryan Mathes
  • Patent number: 10498960
    Abstract: In an embodiment, an electronic device may be configured to capture still frames during video capture but may capture the still frames in the 4×3 aspect ratio and at higher resolution than the 16×9 aspect ratio video frames. The device may interleave high resolution, 4×3 frames and lower resolution 16×9 frames in the video sequence, and may capture the nearest higher resolution, 4×3 frame when the user indicates the capture of a still frame. Alternatively, the device may display 16×9 frames in the video sequence, and then expand to 4×3 frames when a shutter button is pressed. The device may capture the still frame and return to the 16×9 video frames responsive to a release of the shutter button.
    Type: Grant
    Filed: June 27, 2018
    Date of Patent: December 3, 2019
    Assignee: Apple Inc.
    Inventors: D. Amnon Silverstein, Shun Wai Go, Suk Hwan Lim, Timothy J. Millet, Ting Chen, Bin Ni
  • Publication number: 20190306589
    Abstract: One or more computing devices, systems, and/or methods for generating and/or presenting time-lapse videos and/or live-stream videos are provided. For example, a plurality of video frames may be extracted from a video. A first set of video frames and a second set of video frames may be identified from the plurality of video frames. The first set of video frames may be combined to generate a first time-lapse video frame and the second set of video frames may be combined to generate a second time-lapse video frame. A time-lapse video may be generated based upon the first time-lapse video frame and the second time-lapse video frame. In another example, a time-lapse video may be generated based upon a recorded video associated with a live-stream video. The time-lapse video may be presented. Responsive to a completion of the presenting the time-lapse video, the live-stream video may be presented.
    Type: Application
    Filed: June 17, 2019
    Publication date: October 3, 2019
    Inventors: Bin Ni, Benoit Schillings, Stephen Lee Hodnicki, Michael Chang-Chen