Patents by Inventor Jared Max Browarnik
Jared Max Browarnik has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 12198183
Abstract: A dynamic media-product searching platform (DMPSP) transforms media source, product, and user inputs into product metadata and transaction outputs. In some implementations, the DMPSP may receive an indication that a user is interacting with a media source, provide a product overlay to the user indicating products within the media source available for purchase, receive from the user a selection of a product, send product information to the user via the product overlay, receive from the user an indication of interest in purchasing the product, and process a transaction for the user to purchase the product.
Type: Grant
Filed: July 15, 2020
Date of Patent: January 14, 2025
Assignee: Painted Dog, Inc.
Inventors: Tyler Harrison Cooper, Vincent Alexander Crossley, Jared Max Browarnik
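The interaction flow this abstract describes can be sketched as a small state machine. This is a minimal illustration only, not the patented implementation; all class, method, and field names (`ProductOverlayPlatform`, `on_media_interaction`, the catalog layout) are assumptions for the sketch.

```python
# Hypothetical sketch of the DMPSP flow: media interaction -> product
# overlay -> selection -> product info -> purchase intent -> transaction.

class ProductOverlayPlatform:
    def __init__(self, catalog):
        # catalog maps media_id -> list of purchasable products in that media
        self.catalog = catalog

    def on_media_interaction(self, media_id):
        """User starts interacting with a media source: return the overlay."""
        return {"overlay": self.catalog.get(media_id, [])}

    def on_product_selected(self, media_id, product_id):
        """User selects a product from the overlay: return its details."""
        for product in self.catalog.get(media_id, []):
            if product["id"] == product_id:
                return product
        return None

    def on_purchase_intent(self, product):
        """User indicates interest in purchasing: process a transaction."""
        return {"status": "charged", "amount": product["price"]}

platform = ProductOverlayPlatform({
    "episode-1": [{"id": "p1", "name": "Lamp", "price": 49.99}],
})
overlay = platform.on_media_interaction("episode-1")
product = platform.on_product_selected("episode-1", "p1")
receipt = platform.on_purchase_intent(product)
```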
-
Publication number: 20240403946
Abstract: Current interfaces for displaying information about items appearing in videos are obtrusive and counterintuitive. They also rely on annotations, or metadata tags, added by hand to the frames in the video, limiting their ability to display information about items in the videos. In contrast, examples of the systems disclosed here use neural networks to identify items appearing on- and off-screen in response to intuitive user voice queries, touchscreen taps, and/or cursor movements. These systems display information about the on- and off-screen items dynamically and unobtrusively to avoid disrupting the viewing experience.
Type: Application
Filed: April 11, 2024
Publication date: December 5, 2024
Applicant: Painted Dog, Inc.
Inventors: Vincent Alexander Crossley, Jared Max Browarnik, Tyler Harrison Cooper, Carl Ducey Jamilkowski
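One piece of the approach above — resolving a touchscreen tap to a detected item without hand-added metadata — can be sketched as follows. This is an illustrative assumption, not the patented system: `detect_items` stands in for a neural-network detector, and its hard-coded outputs are invented for the example.

```python
# Illustrative sketch: map a user's tap coordinates to an on-screen item
# using detector output, with no hand-annotated frame metadata.

def detect_items(frame):
    """Placeholder for a neural-network detector returning boxes and labels."""
    return [
        {"label": "sofa", "box": (100, 200, 400, 500)},  # (x1, y1, x2, y2)
        {"label": "lamp", "box": (450, 100, 520, 300)},
    ]

def item_at_tap(frame, x, y):
    """Return the detected item whose bounding box contains the tap point."""
    for item in detect_items(frame):
        x1, y1, x2, y2 = item["box"]
        if x1 <= x <= x2 and y1 <= y <= y2:
            return item["label"]
    return None

label = item_at_tap(frame=None, x=480, y=150)  # returns "lamp"
```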
-
Patent number: 12062026
Abstract: Shoppable video enables a viewer to identify and buy items appearing in a video. To retrieve information about the items in a frame of the video, the playback device generates a perceptual hash of that frame and uses that hash to query a first database storing perceptual hashes of different versions of the video. The database query returns an identifier for the frame, which is then used to query a second database that stores the item information. The results of this query are returned to the playback device, which shows them to the user, enabling the viewer to learn more about and possibly purchase the item. Using queries based on perceptual hashes of different versions of the video increases the likelihood of returning a match, despite formatting differences. And using separate hash and metadata databases makes it possible to update the metadata without changing the hashes.
Type: Grant
Filed: July 17, 2023
Date of Patent: August 13, 2024
Assignee: Painted Dog, Inc.
Inventors: Jared Max Browarnik, Ken Aizawa
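The two-database lookup the abstract describes can be sketched in a few lines. This is a toy illustration under stated assumptions: `average_hash` is a simplistic stand-in for a production perceptual hash, and the dict-based "databases" and function names are invented for the example, not taken from the patent.

```python
# Sketch of the two-database shoppable-video lookup: frame -> perceptual
# hash -> frame identifier (database 1) -> item metadata (database 2).

def average_hash(pixels):
    """Toy perceptual hash: one bit per pixel, set if above the mean."""
    mean = sum(pixels) / len(pixels)
    return tuple(1 if p >= mean else 0 for p in pixels)

# First database: perceptual hash -> frame identifier. Hashes of every
# version of the video can point at the same frame id.
hash_db = {}
# Second database: frame identifier -> item metadata. Updating entries here
# never touches the hashes in hash_db.
metadata_db = {}

def index_frame(pixels, frame_id, items):
    hash_db[average_hash(pixels)] = frame_id
    metadata_db[frame_id] = items

def lookup_items(pixels):
    """Playback-device query: hash the frame, then resolve id -> metadata."""
    frame_id = hash_db.get(average_hash(pixels))
    return metadata_db.get(frame_id, []) if frame_id is not None else []

index_frame([10, 200, 30, 220], frame_id="s01e01-f1042", items=["red jacket"])
result = lookup_items([10, 200, 30, 220])  # returns ["red jacket"]
```

Keeping the hash and metadata stores separate mirrors the design point in the abstract: item information can be re-indexed under the same frame identifiers without recomputing any hashes.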
-
Patent number: 11966967
Abstract: Current interfaces for displaying information about items appearing in videos are obtrusive and counterintuitive. They also rely on annotations, or metadata tags, added by hand to the frames in the video, limiting their ability to display information about items in the videos. In contrast, examples of the systems disclosed here use neural networks to identify items appearing on- and off-screen in response to intuitive user voice queries, touchscreen taps, and/or cursor movements. These systems display information about the on- and off-screen items dynamically and unobtrusively to avoid disrupting the viewing experience.
Type: Grant
Filed: March 23, 2022
Date of Patent: April 23, 2024
Assignee: Painted Dog, Inc.
Inventors: Vincent Alexander Crossley, Jared Max Browarnik, Tyler Harrison Cooper, Carl Ducey Jamilkowski
-
Publication number: 20240028867
Abstract: Training neural networks to recognize matching items requires large data sets and long training times. Conversely, training a neural network with triplets of similar objects instead of triplets of identical objects relaxes constraints on the size and content of the data set, making training easier. Moreover, the notion of “similarity” can be almost arbitrary, making it possible to train the neural network to associate objects that aren't visually similar. For instance, the neural network can be trained to associate a suit with a tie, which is not possible with training on identical objects. And because the neural network is trained to recognize similar items, it can also recognize unfamiliar items if they are similar enough to the training data. This is a technical improvement over other neural networks, which can only recognize identical items, and over collaborative filtering systems, which can only recognize items for which they have enough data.
Type: Application
Filed: September 29, 2023
Publication date: January 25, 2024
Applicant: Painted Dog, Inc.
Inventors: Ken Aizawa, Jared Max Browarnik
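The triplet idea in this abstract can be illustrated with a standard triplet margin loss. This is a minimal sketch, not the patented training method: the scalar embeddings, item names, and margin value are assumptions, and a real system would embed images with a deep network.

```python
# Triplet loss on *similar* (not identical) items: the anchor (a suit) is
# pulled toward its positive (a tie worn with it) and pushed away from a
# negative (an unrelated sneaker) in embedding space.

def triplet_loss(anchor, positive, negative, margin=1.0):
    """Hinge loss: anchor must be closer to positive than to negative."""
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return max(0.0, sq_dist(anchor, positive) - sq_dist(anchor, negative) + margin)

# "Similarity" can be almost arbitrary: the positive here is not another
# suit but an item that pairs with the suit, which identical-item training
# could never express.
suit_embedding    = [0.9, 0.1]
tie_embedding     = [0.8, 0.2]   # similar: worn with the suit
sneaker_embedding = [0.1, 0.9]   # dissimilar

loss = triplet_loss(suit_embedding, tie_embedding, sneaker_embedding)
```

Because the loss only constrains relative distances, unfamiliar items that land near familiar ones in embedding space are still matched, which is the generalization property the abstract claims over identical-item training.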
-
Publication number: 20240013178
Abstract: Shoppable video enables a viewer to identify and buy items appearing in a video. To retrieve information about the items in a frame of the video, the playback device generates a perceptual hash of that frame and uses that hash to query a first database storing perceptual hashes of different versions of the video. The database query returns an identifier for the frame, which is then used to query a second database that stores the item information. The results of this query are returned to the playback device, which shows them to the user, enabling the viewer to learn more about and possibly purchase the item. Using queries based on perceptual hashes of different versions of the video increases the likelihood of returning a match, despite formatting differences. And using separate hash and metadata databases makes it possible to update the metadata without changing the hashes.
Type: Application
Filed: July 17, 2023
Publication date: January 11, 2024
Applicant: Painted Dog, Inc.
Inventors: Jared Max Browarnik, Ken Aizawa
-
Patent number: 11775800
Abstract: Training neural networks to recognize matching items requires large data sets and long training times. Conversely, training a neural network with triplets of similar objects instead of triplets of identical objects relaxes constraints on the size and content of the data set, making training easier. Moreover, the notion of “similarity” can be almost arbitrary, making it possible to train the neural network to associate objects that aren't visually similar. For instance, the neural network can be trained to associate a suit with a tie, which is not possible with training on identical objects. And because the neural network is trained to recognize similar items, it can also recognize unfamiliar items if they are similar enough to the training data. This is a technical improvement over other neural networks, which can only recognize identical items, and over collaborative filtering systems, which can only recognize items for which they have enough data.
Type: Grant
Filed: August 6, 2019
Date of Patent: October 3, 2023
Assignee: Painted Dog, Inc.
Inventors: Ken Aizawa, Jared Max Browarnik
-
Patent number: 11727375
Abstract: Shoppable video enables a viewer to identify and buy items appearing in a video. To retrieve information about the items in a frame of the video, the playback device generates a perceptual hash of that frame and uses that hash to query a first database storing perceptual hashes of different versions of the video. The database query returns an identifier for the frame, which is then used to query a second database that stores the item information. The results of this query are returned to the playback device, which shows them to the user, enabling the viewer to learn more about and possibly purchase the item. Using queries based on perceptual hashes of different versions of the video increases the likelihood of returning a match, despite formatting differences. And using separate hash and metadata databases makes it possible to update the metadata without changing the hashes.
Type: Grant
Filed: April 6, 2022
Date of Patent: August 15, 2023
Assignee: Painted Dog, Inc.
Inventors: Jared Max Browarnik, Ken Aizawa
-
Publication number: 20220229867
Abstract: Shoppable video enables a viewer to identify and buy items appearing in a video. To retrieve information about the items in a frame of the video, the playback device generates a perceptual hash of that frame and uses that hash to query a first database storing perceptual hashes of different versions of the video. The database query returns an identifier for the frame, which is then used to query a second database that stores the item information. The results of this query are returned to the playback device, which shows them to the user, enabling the viewer to learn more about and possibly purchase the item. Using queries based on perceptual hashes of different versions of the video increases the likelihood of returning a match, despite formatting differences. And using separate hash and metadata databases makes it possible to update the metadata without changing the hashes.
Type: Application
Filed: April 6, 2022
Publication date: July 21, 2022
Applicant: Painted Dog, Inc.
Inventors: Jared Max Browarnik, Ken Aizawa
-
Publication number: 20220217444
Abstract: Current interfaces for displaying information about items appearing in videos are obtrusive and counterintuitive. They also rely on annotations, or metadata tags, added by hand to the frames in the video, limiting their ability to display information about items in the videos. In contrast, examples of the systems disclosed here use neural networks to identify items appearing on- and off-screen in response to intuitive user voice queries, touchscreen taps, and/or cursor movements. These systems display information about the on- and off-screen items dynamically and unobtrusively to avoid disrupting the viewing experience.
Type: Application
Filed: March 23, 2022
Publication date: July 7, 2022
Applicant: Painted Dog, Inc.
Inventors: Vincent Alexander Crossley, Jared Max Browarnik, Tyler Harrison Cooper, Carl Ducey Jamilkowski
-
Patent number: 11321389
Abstract: Shoppable video enables a viewer to identify and buy items appearing in a video. To retrieve information about the items in a frame of the video, the playback device generates a perceptual hash of that frame and uses that hash to query a first database storing perceptual hashes of different versions of the video. The database query returns an identifier for the frame, which is then used to query a second database that stores the item information. The results of this query are returned to the playback device, which shows them to the user, enabling the viewer to learn more about and possibly purchase the item. Using queries based on perceptual hashes of different versions of the video increases the likelihood of returning a match, despite formatting differences. And using separate hash and metadata databases makes it possible to update the metadata without changing the hashes.
Type: Grant
Filed: July 2, 2020
Date of Patent: May 3, 2022
Assignee: Painted Dog, Inc.
Inventors: Jared Max Browarnik, Ken Aizawa
-
Patent number: 11317159
Abstract: Current interfaces for displaying information about items appearing in videos are obtrusive and counterintuitive. They also rely on annotations, or metadata tags, added by hand to the frames in the video, limiting their ability to display information about items in the videos. In contrast, examples of the systems disclosed here use neural networks to identify items appearing on- and off-screen in response to intuitive user voice queries, touchscreen taps, and/or cursor movements. These systems display information about the on- and off-screen items dynamically and unobtrusively to avoid disrupting the viewing experience.
Type: Grant
Filed: May 10, 2019
Date of Patent: April 26, 2022
Assignee: Painted Dog, Inc.
Inventors: Vincent Alexander Crossley, Jared Max Browarnik, Tyler Harrison Cooper, Carl Ducey Jamilkowski
-
Publication number: 20210256058
Abstract: Shoppable video enables a viewer to identify and buy items appearing in a video. To retrieve information about the items in a frame of the video, the playback device generates a perceptual hash of that frame and uses that hash to query a first database storing perceptual hashes of different versions of the video. The database query returns an identifier for the frame, which is then used to query a second database that stores the item information. The results of this query are returned to the playback device, which shows them to the user, enabling the viewer to learn more about and possibly purchase the item. Using queries based on perceptual hashes of different versions of the video increases the likelihood of returning a match, despite formatting differences. And using separate hash and metadata databases makes it possible to update the metadata without changing the hashes.
Type: Application
Filed: July 2, 2020
Publication date: August 19, 2021
Inventors: Jared Max Browarnik, Ken Aizawa
-
Patent number: 10748206
Abstract: The DYNAMIC MEDIA-PRODUCT SEARCHING PLATFORM APPARATUSES, METHODS AND SYSTEMS (“DMPSP”) transforms media source, product, and user inputs into product metadata and transaction outputs. In some implementations, the DMPSP may receive an indication that a user is interacting with a media source, provide a product overlay to the user indicating products within the media source available for purchase, receive from the user a selection of a product, send product information to the user via the product overlay, receive from the user an indication of interest in purchasing the product, and process a transaction for the user to purchase the product.
Type: Grant
Filed: October 30, 2014
Date of Patent: August 18, 2020
Assignee: Painted Dog, Inc.
Inventors: Tyler Harrison Cooper, Vincent Alexander Crossley, Jared Max Browarnik
-
Publication number: 20200134320
Abstract: Current interfaces for displaying information about items appearing in videos are obtrusive and counterintuitive. They also rely on annotations, or metadata tags, added by hand to the frames in the video, limiting their ability to display information about items in the videos. In contrast, examples of the systems disclosed here use neural networks to identify items appearing on- and off-screen in response to intuitive user voice queries, touchscreen taps, and/or cursor movements. These systems display information about the on- and off-screen items dynamically and unobtrusively to avoid disrupting the viewing experience.
Type: Application
Filed: May 10, 2019
Publication date: April 30, 2020
Inventors: Vincent Alexander Crossley, Jared Max Browarnik, Tyler Harrison Cooper, Carl Ducey Jamilkowski
-
Publication number: 20190362233
Abstract: Training neural networks to recognize matching items requires large data sets and long training times. Conversely, training a neural network with triplets of similar objects instead of triplets of identical objects relaxes constraints on the size and content of the data set, making training easier. Moreover, the notion of “similarity” can be almost arbitrary, making it possible to train the neural network to associate objects that aren't visually similar. For instance, the neural network can be trained to associate a suit with a tie, which is not possible with training on identical objects. And because the neural network is trained to recognize similar items, it can also recognize unfamiliar items if they are similar enough to the training data. This is a technical improvement over other neural networks, which can only recognize identical items, and over collaborative filtering systems, which can only recognize items for which they have enough data.
Type: Application
Filed: August 6, 2019
Publication date: November 28, 2019
Inventors: Ken Aizawa, Jared Max Browarnik