Patents by Inventor Abhishek Kar
Abhishek Kar has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20220327007
Abstract: Techniques are disclosed for migrating one or more services from an edge device to a cloud computing environment. In one example, a migration service receives a request to migrate a first set of services from the edge device to the cloud computing environment. The migration service identifies a hardware profile of a computing device (or devices) of the cloud computing environment that matches the edge device, and then configures the computing device to execute a second set of services that corresponds to the first set of services. The migration service establishes a communication channel between the edge device and the computing device, and then executes a set of migration operations such that the second set of services is configured to execute as the first set of services. The computing device may operate in a virtual bootstrap environment or dedicated region of the cloud computing environment.
Type: Application
Filed: January 21, 2022
Publication date: October 13, 2022
Applicant: Oracle International Corporation
Inventors: Eden Grail Adogla, David Dale Becker, Maxim Baturin, Brijesh Singh, Iliya Roitburg, Abhishek Kar
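The hardware-profile matching step described in the abstract can be pictured with a minimal sketch. The profile fields (`arch`, `cpus`, `memory_gb`) and the matching rule (architecture must match exactly, CPU and memory must meet or exceed the edge device's) are illustrative assumptions, not Oracle's actual criteria.

```python
# Hypothetical hardware-profile matching for edge-to-cloud migration.
# Profiles are plain dicts; the first sufficiently large, same-architecture
# cloud host is selected as the migration target.

def find_matching_host(edge_profile, cloud_hosts):
    """Return the first cloud host whose profile can stand in for the edge device."""
    for host in cloud_hosts:
        if (host["arch"] == edge_profile["arch"]
                and host["cpus"] >= edge_profile["cpus"]
                and host["memory_gb"] >= edge_profile["memory_gb"]):
            return host
    return None

edge = {"arch": "x86_64", "cpus": 8, "memory_gb": 32}
hosts = [
    {"name": "vm-a", "arch": "aarch64", "cpus": 16, "memory_gb": 64},
    {"name": "vm-b", "arch": "x86_64", "cpus": 4, "memory_gb": 16},
    {"name": "vm-c", "arch": "x86_64", "cpus": 16, "memory_gb": 64},
]
match = find_matching_host(edge, hosts)
print(match["name"])  # vm-c
```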
-
Patent number: 11436275
Abstract: Provided are mechanisms and processes for performing visual search using multi-view digital media representations, such as surround views. In one example, a process includes receiving a visual search query that includes a surround view of an object to be searched, where the surround view includes spatial information, scale information, and different viewpoint images of the object. The surround view is compared to stored surround views by comparing spatial information and scale information of the surround view to spatial information and scale information of the stored surround views. A correspondence measure is then generated indicating the degree of similarity between the surround view and a possible match. At least one search result is then transmitted with a corresponding image in response to the visual search query.
Type: Grant
Filed: August 29, 2019
Date of Patent: September 6, 2022
Assignee: Fyusion, Inc.
Inventors: Stefan Johannes Josef Holzer, Abhishek Kar, Alexander Jay Bruen Trevor, Pantelis Kalogiros, Ioannis Spanos, Radu Bogdan Rusu
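A toy illustration of the correspondence measure in the abstract: here the spatial information of a surround view is reduced to a flat feature vector plus a scale value, and the score blends cosine similarity of the features with scale agreement. The 0.9/0.1 weighting and the data shapes are invented for illustration, not Fyusion's method.

```python
# Toy visual-search scoring: compare a query surround view against stored
# ones and return the best match by a combined feature/scale similarity.
import math

def correspondence_measure(query, candidate):
    """Blend descriptor cosine similarity with scale agreement into one score."""
    dot = sum(a * b for a, b in zip(query["features"], candidate["features"]))
    norm = (math.sqrt(sum(a * a for a in query["features"]))
            * math.sqrt(sum(b * b for b in candidate["features"])))
    feat_sim = dot / norm
    scale_sim = (min(query["scale"], candidate["scale"])
                 / max(query["scale"], candidate["scale"]))
    return 0.9 * feat_sim + 0.1 * scale_sim

query = {"features": [1.0, 0.0, 1.0], "scale": 1.0}
stored = [
    {"id": "chair", "features": [1.0, 0.1, 0.9], "scale": 1.1},
    {"id": "lamp",  "features": [0.0, 1.0, 0.0], "scale": 1.0},
]
best = max(stored, key=lambda s: correspondence_measure(query, s))
print(best["id"])  # chair
```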
-
Publication number: 20220236126
Abstract: Devices and methods for detecting axial forces applied to a container are provided. The devices can include a device housing, a container section, a force measurement sensor, and a processing section. The device housing can extend between a first housing end and a second housing end along a longitudinal axis. The container section can be mounted to the housing proximate the first housing end. The container section can have an open first section end and a closed second section end spaced apart along the longitudinal axis and at least one sidewall extending therebetween. The container section can define a cavity bounded by the first section end, the second section end and the at least one sidewall. The force measurement sensor can be positioned to generate the force measurement data in response to an axial force applied at the first section end.
Type: Application
Filed: April 12, 2022
Publication date: July 28, 2022
Inventors: Jonathan Halse, Abhishek Kar, Jordan Ritchie, Shawn Maurice Dale Durette, Daniel Robert Rogers
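On the processing side, the force measurement data the abstract mentions might be handled roughly as below: raw sensor counts go through a calibration to physical units, and samples above a threshold are flagged as axial-force events. The linear calibration, its constants, and the threshold are all invented for illustration.

```python
# Hypothetical processing of raw force-sensor counts into axial-force events.

def counts_to_newtons(counts, slope=0.05, offset=-2.0):
    """Apply an assumed linear calibration: force [N] = slope * counts + offset."""
    return slope * counts + offset

def detect_axial_events(samples, threshold_n=10.0):
    """Return indices of samples whose calibrated force exceeds the threshold."""
    return [i for i, c in enumerate(samples) if counts_to_newtons(c) > threshold_n]

samples = [40, 100, 260, 500, 90]
print(detect_axial_events(samples))  # [2, 3]
```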
-
Patent number: 11354851
Abstract: Reference images of an object may be mapped to an object model to create a reference object model representation. Evaluation images of the object may also be mapped to the object model via the processor to create an evaluation object model representation. Object condition information may be determined by comparing the reference object model representation with the evaluation object model representation. The object condition information may indicate one or more differences between the reference object model representation and the evaluation object model representation. A graphical representation of the object model that includes the object condition information may be displayed on a display screen.
Type: Grant
Filed: March 29, 2021
Date of Patent: June 7, 2022
Assignee: Fyusion, Inc.
Inventors: Stefan Johannes Josef Holzer, Abhishek Kar, Matteo Munaro, Pavel Hanchar, Radu Bogdan Rusu, Santi Arano
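The reference-vs-evaluation comparison in the abstract can be sketched per component. Representing each model component by a single scalar "condition" value and flagging deltas beyond a tolerance is a deliberate simplification, not the patented representation.

```python
# Simplified object-condition comparison: each component of the object model
# carries one condition score, and large reference/evaluation deltas are
# reported as condition information.

def condition_differences(reference, evaluation, tolerance=0.1):
    """Return components whose evaluation value drifted beyond the tolerance."""
    diffs = {}
    for component, ref_value in reference.items():
        delta = abs(evaluation.get(component, 0.0) - ref_value)
        if delta > tolerance:
            diffs[component] = round(delta, 3)
    return diffs

reference  = {"door": 0.95, "hood": 0.90, "bumper": 0.92}
evaluation = {"door": 0.93, "hood": 0.55, "bumper": 0.91}
print(condition_differences(reference, evaluation))  # {'hood': 0.35}
```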
-
Patent number: 11326968
Abstract: Devices and methods for detecting axial forces applied to a container are provided. The devices can include a device housing, a container section, a force measurement sensor, and a processing section. The device housing can extend between a first housing end and a second housing end along a longitudinal axis. The container section can be mounted to the housing proximate the first housing end. The container section can have an open first section end and a closed second section end spaced apart along the longitudinal axis and at least one sidewall extending therebetween. The container section can define a cavity bounded by the first section end, the second section end and the at least one sidewall. The force measurement sensor can be positioned to generate the force measurement data in response to an axial force applied at the first section end.
Type: Grant
Filed: July 13, 2020
Date of Patent: May 10, 2022
Assignee: Smart Skin Technologies Inc.
Inventors: Jonathan Halse, Abhishek Kar, Jordan Ritchie, Shawn Maurice Dale Durette, Daniel Robert Rogers
-
Publication number: 20220129601
Abstract: A computer system may receive a layout of a data center, the layout of the data center identifying physical locations of a plurality of server racks, electrical distribution feeds, and uninterruptible power supplies. The computer system may receive a fault domain configuration for the data center, the fault domain configuration identifying virtual locations of a plurality of logical fault domains for distributing one or more instances so that the instances are stored on independent physical hardware devices within a single availability fault domain. The computer system may determine the configuration for the data center by assigning the plurality of fault domains to a plurality of electrical zones, wherein each electrical zone provides a redundant electrical power supply across the plurality of logical fault domains in an event of a failure of one or more electrical distribution feeds. The computer system may display the configuration for the data center on a display.
Type: Application
Filed: March 24, 2021
Publication date: April 28, 2022
Applicant: Oracle International Corporation
Inventors: Abhishek Kar, Michael Hicks, Christopher Richard Newcombe, Kenneth J. Patchett
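The fault-domain-to-electrical-zone assignment in the abstract can be pictured with a round-robin sketch: spreading logical fault domains across zones means no single distribution-feed failure takes out every replica. The data shapes and the round-robin policy are assumptions for illustration, not Oracle's configuration format.

```python
# Illustrative assignment of logical fault domains to electrical zones.
# Cycling through the zones round-robin spreads domains so that a single
# zone (i.e. distribution feed) failure leaves other domains powered.

def assign_fault_domains(fault_domains, electrical_zones):
    """Map each fault domain to a zone, cycling through zones round-robin."""
    return {fd: electrical_zones[i % len(electrical_zones)]
            for i, fd in enumerate(fault_domains)}

domains = ["FD-1", "FD-2", "FD-3"]
zones = ["zone-A", "zone-B"]
assignment = assign_fault_domains(domains, zones)
print(assignment)  # {'FD-1': 'zone-A', 'FD-2': 'zone-B', 'FD-3': 'zone-A'}
```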
-
Publication number: 20220108472
Abstract: The pose of an object may be estimated based on fiducial points identified in a visual representation of the object. Each fiducial point may correspond with a component of the object, and may be associated with a first location in an image of the object and a second location in a 3D coordinate space. A 3D skeleton of the object may be determined by connecting the locations in the 3D space, and the object's pose may be determined based on the 3D skeleton.
Type: Application
Filed: October 15, 2021
Publication date: April 7, 2022
Applicant: Fyusion, Inc.
Inventors: Stefan Johannes Josef Holzer, Pavel Hanchar, Abhishek Kar, Matteo Munaro, Krunal Ketan Chande, Radu Bogdan Rusu
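A hedged sketch of the skeleton step above: each fiducial point carries a 2D image location and a 3D coordinate, edges connect named components into a skeleton, and a simple pose estimate (yaw about the vertical axis) is read off one skeleton segment. The component names, edge list, and yaw-from-axis rule are invented examples, not the patented procedure.

```python
# Assemble a toy 3-D skeleton from fiducial points and derive a yaw estimate.
import math

fiducials = {
    "front_left_wheel": {"image_xy": (120, 340), "xyz": (1.0, 0.0, 0.0)},
    "rear_left_wheel":  {"image_xy": (420, 350), "xyz": (-1.5, 0.0, 0.0)},
    "left_mirror":      {"image_xy": (210, 180), "xyz": (0.6, 0.0, 1.1)},
}
edges = [("front_left_wheel", "rear_left_wheel"),
         ("front_left_wheel", "left_mirror")]

# Connect the 3-D locations of each edge's endpoints into skeleton segments.
skeleton = [(fiducials[a]["xyz"], fiducials[b]["xyz"]) for a, b in edges]

# Estimate yaw (rotation about the vertical axis) from the wheelbase segment.
(fx, fy, _), (rx, ry, _) = skeleton[0]
yaw_deg = math.degrees(math.atan2(fy - ry, fx - rx))
print(len(skeleton), yaw_deg)  # 2 0.0
```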
-
Publication number: 20220060639
Abstract: Various embodiments of the present invention relate generally to systems and processes for transforming a style of video data. In one embodiment, a neural network is used to interpolate native video data received from a camera system on a mobile device in real-time. The interpolation converts the live native video data into a particular style. For example, the style can be associated with a particular artist or a particular theme. The stylized video data can be viewed on a display of the mobile device in a manner similar to which native live video data is output to the display. Thus, the stylized video data, which is viewed on the display, is consistent with a current position and orientation of the camera system on the display.
Type: Application
Filed: November 4, 2021
Publication date: February 24, 2022
Applicant: Fyusion, Inc.
Inventors: Stefan Johannes Josef Holzer, Abhishek Kar, Pavel Hanchar, Radu Bogdan Rusu, Martin Saelzle, Shuichi Tsutsumi, Stephen David Miller, George Haber
-
Publication number: 20220011182
Abstract: Devices and methods for detecting axial forces applied to a container are provided. The devices can include a device housing, a container section, a force measurement sensor, and a processing section. The device housing can extend between a first housing end and a second housing end along a longitudinal axis. The container section can be mounted to the housing proximate the first housing end. The container section can have an open first section end and a closed second section end spaced apart along the longitudinal axis and at least one sidewall extending therebetween. The container section can define a cavity bounded by the first section end, the second section end and the at least one sidewall. The force measurement sensor can be positioned to generate the force measurement data in response to an axial force applied at the first section end.
Type: Application
Filed: July 13, 2020
Publication date: January 13, 2022
Inventors: Jonathan Halse, Abhishek Kar, Jordan Ritchie, Shawn Maurice Dale Durette, Daniel Robert Rogers
-
Patent number: 11202017
Abstract: Various embodiments of the present invention relate generally to systems and processes for transforming a style of video data. In one embodiment, a neural network is used to interpolate native video data received from a camera system on a mobile device in real-time. The interpolation converts the live native video data into a particular style. For example, the style can be associated with a particular artist or a particular theme. The stylized video data can be viewed on a display of the mobile device in a manner similar to which native live video data is output to the display. Thus, the stylized video data, which is viewed on the display, is consistent with a current position and orientation of the camera system on the display.
Type: Grant
Filed: September 27, 2017
Date of Patent: December 14, 2021
Assignee: Fyusion, Inc.
Inventors: Stefan Johannes Josef Holzer, Abhishek Kar, Pavel Hanchar, Radu Bogdan Rusu, Martin Saelzle, Shuichi Tsutsumi, Stephen David Miller, George Haber
-
Patent number: 11176704
Abstract: The pose of an object may be estimated based on fiducial points identified in a visual representation of the object. Each fiducial point may correspond with a component of the object, and may be associated with a first location in an image of the object and a second location in a 3D space. A 3D skeleton of the object may be determined by connecting the locations in the 3D space, and the object's pose may be determined based on the 3D skeleton.
Type: Grant
Filed: July 22, 2019
Date of Patent: November 16, 2021
Assignee: Fyusion, Inc.
Inventors: Stefan Johannes Josef Holzer, Pavel Hanchar, Abhishek Kar, Matteo Munaro, Krunal Ketan Chande, Radu Bogdan Rusu
-
Publication number: 20210312702
Abstract: Images of an object may be captured at a computing device. Each of the images may be captured from a respective viewpoint based on image capture configuration information identifying one or more parameter values. A multiview image digital media representation of the object may be generated that includes some or all of the images of the object and that is navigable in one or more dimensions.
Type: Application
Filed: June 17, 2021
Publication date: October 7, 2021
Applicant: Fyusion, Inc.
Inventors: Stefan Johannes Josef Holzer, Santiago Arano Perez, Abhishek Kar, Matteo Munaro, Pavel Hanchar, Radu Bogdan Rusu, Martin Markus Hubert Wawro, Ashley Wakefield, Rodrigo Ortiz-Cayon, Josh Faust, Jai Chaudhry, Nico Gregor Sebastian Blodow, Mike Penz
-
Publication number: 20210295050
Abstract: A multi-view interactive digital media representation (MVIDMR) of an object can be generated from live images of an object captured from a camera. Selectable tags can be placed at locations on the object in the MVIDMR. When the selectable tags are selected, media content can be output which shows details of the object at the location where the selectable tag is placed. A machine learning algorithm can be used to automatically recognize landmarks on the object in the frames of the MVIDMR and a structure from motion calculation can be used to determine 3-D positions associated with the landmarks. A 3-D skeleton associated with the object can be assembled from the 3-D positions and projected into the frames associated with the MVIDMR. The 3-D skeleton can be used to determine the selectable tag locations in the frames of the MVIDMR of the object.
Type: Application
Filed: June 3, 2021
Publication date: September 23, 2021
Applicant: Fyusion, Inc.
Inventors: Chris Beall, Abhishek Kar, Stefan Johannes Josef Holzer, Radu Bogdan Rusu, Pavel Hanchar
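The projection step above, placing tags by projecting 3-D skeleton joints into a frame, can be sketched with a bare pinhole camera model (focal length `f`, principal point `cx`, `cy`) standing in for the full structure-from-motion camera. The joint names and numeric values are made up for illustration.

```python
# Project camera-space 3-D skeleton joints to pixel coordinates, where
# selectable tags would be drawn on the frame.

def project(point_xyz, f=500.0, cx=320.0, cy=240.0):
    """Project a camera-space 3-D point to pixel coordinates (pinhole model)."""
    x, y, z = point_xyz
    return (cx + f * x / z, cy + f * y / z)

skeleton_joints = {"headlight": (0.4, -0.1, 2.0), "door_handle": (-0.2, 0.0, 2.5)}
tag_positions = {name: project(p) for name, p in skeleton_joints.items()}
print(tag_positions["headlight"])  # (420.0, 215.0)
```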
-
Publication number: 20210224973
Abstract: A background scenery portion may be identified in each of a plurality of image sets of an object, where each image set includes images captured simultaneously from different cameras. A correspondence between the image sets may be determined, where the correspondence tracks control points associated with the object and present in multiple images. A multi-view interactive digital media representation of the object that is navigable in one or more dimensions and that includes the image sets may be generated and stored.
Type: Application
Filed: January 8, 2021
Publication date: July 22, 2021
Applicant: Fyusion, Inc.
Inventors: Stefan Johannes Josef Holzer, Abhishek Kar, Matteo Munaro, Pavel Hanchar, Radu Bogdan Rusu
-
Publication number: 20210225038
Abstract: Orientation data for image data of an object may be determined. The orientation data may identify camera location and orientation for image data with respect to an object model representing the object at a point in time. A change to the object between different points in time may be identified by identifying a difference in image data associated with different points in time. The change may be presented in a visual representation of the object model in a user interface displayed on a display screen.
Type: Application
Filed: January 8, 2021
Publication date: July 22, 2021
Applicant: Fyusion, Inc.
Inventors: Stefan Johannes Josef Holzer, Abhishek Kar, Matteo Munaro, Pavel Hanchar, Radu Bogdan Rusu
-
Publication number: 20210217158
Abstract: Reference images of an object may be mapped to an object model to create a reference object model representation. Evaluation images of the object may also be mapped to the object model via the processor to create an evaluation object model representation. Object condition information may be determined by comparing the reference object model representation with the evaluation object model representation. The object condition information may indicate one or more differences between the reference object model representation and the evaluation object model representation. A graphical representation of the object model that includes the object condition information may be displayed on a display screen.
Type: Application
Filed: March 29, 2021
Publication date: July 15, 2021
Applicant: Fyusion, Inc.
Inventors: Stefan Johannes Josef Holzer, Abhishek Kar, Matteo Munaro, Pavel Hanchar, Radu Bogdan Rusu, Santi Arano
-
Publication number: 20210209836
Abstract: A plurality of images may be analyzed to determine an object model. The object model may have a plurality of components, and each of the images may correspond with one or more of the components. Component condition information may be determined for one or more of the components based on the images. The component condition information may indicate damage incurred by the object portion corresponding with the component.
Type: Application
Filed: February 11, 2021
Publication date: July 8, 2021
Applicant: Fyusion, Inc.
Inventors: Stefan Johannes Josef Holzer, Abhishek Kar, Matteo Munaro, Pavel Hanchar, Radu Bogdan Rusu, Santi Arano
-
Patent number: 11055534
Abstract: A multi-view interactive digital media representation (MVIDMR) of an object can be generated from live images of an object captured from a camera. Selectable tags can be placed at locations on the object in the MVIDMR. When the selectable tags are selected, media content can be output which shows details of the object at the location where the selectable tag is placed. A machine learning algorithm can be used to automatically recognize landmarks on the object in the frames of the MVIDMR and a structure from motion calculation can be used to determine 3-D positions associated with the landmarks. A 3-D skeleton associated with the object can be assembled from the 3-D positions and projected into the frames associated with the MVIDMR. The 3-D skeleton can be used to determine the selectable tag locations in the frames of the MVIDMR of the object.
Type: Grant
Filed: January 31, 2020
Date of Patent: July 6, 2021
Assignee: Fyusion, Inc.
Inventors: Chris Beall, Abhishek Kar, Stefan Johannes Josef Holzer, Radu Bogdan Rusu, Pavel Hanchar
-
Patent number: 11004188
Abstract: Reference images of an object may be mapped to an object model to create a reference object model representation. Evaluation images of the object may also be mapped to the object model via the processor to create an evaluation object model representation. Object condition information may be determined by comparing the reference object model representation with the evaluation object model representation. The object condition information may indicate one or more differences between the reference object model representation and the evaluation object model representation. A graphical representation of the object model that includes the object condition information may be displayed on a display screen.
Type: Grant
Filed: November 22, 2019
Date of Patent: May 11, 2021
Assignee: Fyusion, Inc.
Inventors: Stefan Johannes Josef Holzer, Abhishek Kar, Matteo Munaro, Pavel Hanchar, Radu Bogdan Rusu
-
Patent number: 10958887
Abstract: A sampling density for capturing a plurality of two-dimensional images of a three-dimensional scene may be determined. The sampling density may be below the Nyquist rate. However, the sampling density may be sufficiently high such that captured images may be promoted to multiplane images and used to generate novel viewpoints in a light field reconstruction framework. Recording guidance may be provided at a display screen on a mobile computing device based on the determined sampling density. The recording guidance identifies a plurality of camera poses at which to position a camera to capture images of the three-dimensional scene. A plurality of images captured via the camera based on the recording guidance may be stored on a storage device.
Type: Grant
Filed: September 18, 2019
Date of Patent: March 23, 2021
Assignee: Fyusion, Inc.
Inventors: Abhishek Kar, Rodrigo Ortiz Cayon, Ben Mildenhall, Stefan Johannes Josef Holzer, Radu Bogdan Rusu
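The recording-guidance step above can be pictured as generating camera poses evenly spaced along an arc around the scene, with spacing derived from a chosen sampling density (views per degree). The arc geometry, the density value, and the pose format are illustrative assumptions, not the patented sampling criterion.

```python
# Generate evenly spaced recording-guidance camera poses on an arc, with
# pose count driven by an assumed sampling density in views per degree.
import math

def guidance_poses(radius, arc_deg, views_per_deg):
    """Return (x, z, heading_deg) camera poses evenly spaced along an arc."""
    n = max(2, int(arc_deg * views_per_deg) + 1)
    poses = []
    for i in range(n):
        theta = math.radians(arc_deg * i / (n - 1))
        poses.append((radius * math.sin(theta), radius * math.cos(theta),
                      round(math.degrees(theta), 1)))
    return poses

poses = guidance_poses(radius=2.0, arc_deg=90, views_per_deg=0.1)
print(len(poses), poses[0][2], poses[-1][2])  # 10 0.0 90.0
```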