Patents by Inventor Csaba Petre
Csaba Petre has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 11113825
Abstract: A projected image item tracking system that analyzes projected camera images to determine items taken from, placed on, or moved on a shelf or other area in an autonomous store. The items and actions performed on them may then be attributed to a shopper near the area. Projected images may be combined to generate a 3D volume difference between the state of the area before and after shopper interaction. The volume difference may be calculated using plane-sweep stereo, or using convolutional neural networks. Because these methods may be computationally intensive, the system may first localize a change volume where items appear to have been displaced, and then generate a volume difference only within that change volume. This optimization results in significant savings in power consumption and in more rapid identification of items. The 3D volume difference may also indicate the quantity of items displaced, for example from a vertical stack.
Type: Grant
Filed: September 6, 2019
Date of Patent: September 7, 2021
Assignee: ACCEL ROBOTICS CORPORATION
Inventors: Marius Buibas, John Quinn, Kaylee Feigum, Csaba Petre, Michael Brandon Maseda, Martin Alan Cseh
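The localize-then-diff optimization this abstract describes can be sketched on voxel occupancy grids. This is an illustrative version only, not the patented implementation: the function names, the 0.5 change threshold, and the occupancy-grid representation are all assumptions.

```python
import numpy as np

def localize_change_volume(before, after, threshold=0.5):
    """Find the bounding box of voxels that changed between two occupancy
    grids of the same shelf region (reconstructed from projected images).
    Returns None if nothing changed, else a tuple of slices."""
    changed = np.abs(after - before) > threshold
    if not changed.any():
        return None
    idx = np.argwhere(changed)
    lo, hi = idx.min(axis=0), idx.max(axis=0) + 1
    return tuple(slice(int(a), int(b)) for a, b in zip(lo, hi))

def volume_difference(before, after, voxel_volume=1.0, threshold=0.5):
    """Compute the signed volume change, restricted to the localized change
    volume so the expensive comparison runs only on a small sub-grid.
    Positive result: items added; negative: items removed."""
    region = localize_change_volume(before, after, threshold)
    if region is None:
        return 0.0
    sub_b, sub_a = before[region], after[region]
    return float((sub_a.round() - sub_b.round()).sum()) * voxel_volume
```

Dividing the result by a known per-item volume would give the displaced quantity, e.g. the number of items taken from a vertical stack.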
-
Patent number: 11049263
Abstract: A projected image item tracking system that analyzes projected camera images to determine items taken from, placed on, or moved on a shelf or other area in an autonomous store. The items and actions performed on them may then be attributed to a shopper near the area. Projected images may be combined to generate a 3D volume difference between the state of the area before and after shopper interaction. The volume difference may be calculated using plane-sweep stereo, or using convolutional neural networks. Because these methods may be computationally intensive, the system may first localize a change volume where items appear to have been displaced, and then generate a volume difference only within that change volume. This optimization results in significant savings in power consumption and in more rapid identification of items. The 3D volume difference may also indicate the quantity of items displaced, for example from a vertical stack.
Type: Grant
Filed: September 6, 2019
Date of Patent: June 29, 2021
Assignee: ACCEL ROBOTICS CORPORATION
Inventors: Marius Buibas, John Quinn, Kaylee Feigum, Csaba Petre, Michael Brandon Maseda, Martin Alan Cseh
-
Patent number: 10818016
Abstract: Systems and methods for predictive/reconstructive visual object tracking are disclosed. The visual object tracking has advanced abilities to track objects in scenes, which can have a variety of applications as discussed in this disclosure. In some exemplary implementations, a visual system can comprise a plurality of associative memory units, wherein each associative memory unit has a plurality of layers. The associative memory units can be communicatively coupled to each other in a hierarchical structure, wherein data in associative memory units at higher levels of the hierarchical structure are more abstract than data in lower-level associative memory units. The associative memory units can communicate with one another, supplying contextual data.
Type: Grant
Filed: March 19, 2019
Date of Patent: October 27, 2020
Assignee: Brain Corporation
Inventors: Filip Piekniewski, Micah Richert, Dimitry Fisher, Patryk Laurent, Csaba Petre
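The hierarchical associative-memory idea can be illustrated with a toy nearest-neighbor memory plus a pooling step that makes higher-level representations coarser (more abstract). This is a sketch under stated assumptions, not the patented architecture: the class name, the nearest-neighbor recall rule, and the 2:1 mean-pooling abstraction are all invented for illustration.

```python
import numpy as np

class AssociativeMemoryUnit:
    """Toy associative memory: stores (key, value) layer pairs and recalls
    the value whose stored key is nearest to a query pattern."""
    def __init__(self):
        self.keys, self.values = [], []

    def store(self, key, value):
        self.keys.append(np.asarray(key, float))
        self.values.append(np.asarray(value, float))

    def recall(self, query):
        query = np.asarray(query, float)
        dists = [np.linalg.norm(query - k) for k in self.keys]
        return self.values[int(np.argmin(dists))]

def abstract(x):
    """Higher hierarchy levels see a coarser summary of lower-level data:
    here, simple 2:1 mean pooling stands in for 'more abstract'."""
    x = np.asarray(x, float)
    return x.reshape(-1, 2).mean(axis=1)
```

A lower unit would store raw frame patterns while an upper unit stores their pooled summaries; recall at the upper level can then supply contextual data back to the lower level.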
-
Patent number: 10783491
Abstract: A system that integrates camera images and quantity sensors to determine items taken from, placed on, or moved on a shelf or other area in an autonomous store. The items and actions performed may then be attributed to a shopper near the area. Shelves may be divided into storage zones, such as bins or lanes, and a quantity sensor may measure the item quantity in each zone. Quantity changes indicate that a shopper has taken or placed items in the zone. Distance sensors, such as LIDAR, may be used for shelves that push items towards the front. Strain gauges may be used for bins or hanging rods. Quantity changes may trigger analysis of camera images of the shelf to identify the items taken or replaced. Images from multiple cameras that view a shelf may be projected to a vertical plane at the front of the shelf to simplify analysis.
Type: Grant
Filed: February 28, 2020
Date of Patent: September 22, 2020
Assignee: ACCEL ROBOTICS CORPORATION
Inventors: Marius Buibas, John Quinn, Kaylee Feigum, Csaba Petre, Michael Brandon Maseda, Martin Alan Cseh
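The event-triggering step (quantity change per zone drives image analysis of just that zone) can be sketched as a simple diff over per-zone counts. The function name, event tuple shape, and zone labels are illustrative assumptions; actual counts would come from LIDAR distance readings or strain-gauge weights converted to item quantities.

```python
def detect_zone_events(prev_quantities, new_quantities):
    """Compare per-zone item counts from quantity sensors and emit an event
    for each zone whose count changed. In the described system, each event
    would trigger camera-image analysis of that zone to identify the item."""
    events = []
    for zone in prev_quantities:
        delta = new_quantities[zone] - prev_quantities[zone]
        if delta < 0:
            events.append((zone, "take", -delta))
        elif delta > 0:
            events.append((zone, "put", delta))
    return events
```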
-
Patent number: 10586208
Abstract: A system that integrates camera images and quantity sensors to determine items taken from, placed on, or moved on a shelf or other area in an autonomous store. The items and actions performed may then be attributed to a shopper near the area. Shelves may be divided into storage zones, such as bins or lanes, and a quantity sensor may measure the item quantity in each zone. Quantity changes indicate that a shopper has taken or placed items in the zone. Distance sensors, such as LIDAR, may be used for shelves that push items towards the front. Strain gauges may be used for bins or hanging rods. Quantity changes may trigger analysis of camera images of the shelf to identify the items taken or replaced. Images from multiple cameras that view a shelf may be projected to a vertical plane at the front of the shelf to simplify analysis.
Type: Grant
Filed: July 16, 2019
Date of Patent: March 10, 2020
Assignee: ACCEL ROBOTICS CORPORATION
Inventors: Marius Buibas, John Quinn, Kaylee Feigum, Csaba Petre, Filip Piekniewski, Aleksander Bapst, Soheyl Yousefisahi, Chin-Chang Kuo
-
Patent number: 10535146
Abstract: A projected image item tracking system that analyzes projected camera images to determine items taken from, placed on, or moved on a shelf or other area in an autonomous store. The items and actions performed on them may then be attributed to a shopper near the area. Projected images may be combined to generate a 3D volume difference between the state of the area before and after shopper interaction. The volume difference may be calculated using plane-sweep stereo, or using convolutional neural networks. Because these methods may be computationally intensive, the system may first localize a change volume where items appear to have been displaced, and then generate a volume difference only within that change volume. This optimization results in significant savings in power consumption and in more rapid identification of items. The 3D volume difference may also indicate the quantity of items displaced, for example from a vertical stack.
Type: Grant
Filed: May 6, 2019
Date of Patent: January 14, 2020
Assignee: ACCEL ROBOTICS CORPORATION
Inventors: Marius Buibas, John Quinn, Kaylee Feigum, Csaba Petre, Michael Brandon Maseda, Martin Alan Cseh
-
Publication number: 20190244365
Abstract: Systems and methods for predictive/reconstructive visual object tracking are disclosed. The visual object tracking has advanced abilities to track objects in scenes, which can have a variety of applications as discussed in this disclosure. In some exemplary implementations, a visual system can comprise a plurality of associative memory units, wherein each associative memory unit has a plurality of layers. The associative memory units can be communicatively coupled to each other in a hierarchical structure, wherein data in associative memory units at higher levels of the hierarchical structure are more abstract than data in lower-level associative memory units. The associative memory units can communicate with one another, supplying contextual data.
Type: Application
Filed: March 19, 2019
Publication date: August 8, 2019
Inventors: Filip Piekniewski, Micah Richert, Dimitry Fisher, Patryk Laurent, Csaba Petre
-
Patent number: 10373322
Abstract: An autonomous store system that analyzes camera images to track people and their interactions with items. A processor obtains a 3D model of a store that contains items and item storage areas, receives images from cameras captured over a time period, and analyzes the images and the 3D model to detect a person in the store, calculate the person's trajectory, identify an item storage area proximal to that trajectory during an interaction time period, and analyze two or more images to identify an item within the item storage area that is moved during the interaction time period. These images are captured within or proximal in time to the interaction time period and contain views of the item storage area, and the system attributes motion of the item to the person. The system also enables calibration and placement algorithms for cameras.
Type: Grant
Filed: July 16, 2018
Date of Patent: August 6, 2019
Assignee: ACCEL ROBOTICS CORPORATION
Inventors: Marius Buibas, John Quinn, Kaylee Feigum, Csaba Petre
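The step of identifying an interaction time period from a trajectory can be sketched as finding the time spans during which the tracked person is within some distance of a storage area. The function name, the (t, x, y) trajectory encoding, and the circular proximity test are assumptions for illustration; the patent works with a full 3D store model.

```python
def interaction_periods(trajectory, area_center, radius):
    """trajectory: chronological list of (t, x, y) positions for one person.
    Returns (t_start, t_end) spans during which the person is within
    `radius` of the storage area center -- candidate interaction periods
    whose camera images would then be analyzed for moved items."""
    periods, start = [], None
    for t, x, y in trajectory:
        near = (x - area_center[0])**2 + (y - area_center[1])**2 <= radius**2
        if near and start is None:
            start = t                      # interaction begins
        elif not near and start is not None:
            periods.append((start, t))     # interaction ended before t
            start = None
    if start is not None:                  # still near at end of trajectory
        periods.append((start, trajectory[-1][0]))
    return periods
```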
-
Patent number: 10282849
Abstract: Systems and methods for predictive/reconstructive visual object tracking are disclosed. The visual object tracking has advanced abilities to track objects in scenes, which can have a variety of applications as discussed in this disclosure. In some exemplary implementations, a visual system can comprise a plurality of associative memory units, wherein each associative memory unit has a plurality of layers. The associative memory units can be communicatively coupled to each other in a hierarchical structure, wherein data in associative memory units at higher levels of the hierarchical structure are more abstract than data in lower-level associative memory units. The associative memory units can communicate with one another, supplying contextual data.
Type: Grant
Filed: June 19, 2017
Date of Patent: May 7, 2019
Assignee: Brain Corporation
Inventors: Filip Piekniewski, Micah Richert, Dimitry Fisher, Patryk Laurent, Csaba Petre
-
Patent number: 10282720
Abstract: A system that analyzes camera images to track a person from a point where the person obtains an authorization to a different point where the authorization is used. The authorization may be extended in time and space from the point where it was initially obtained. Scenarios enabled by embodiments include automatically opening a locked door or gate for an authorized person and automatically charging items taken by a person to that person's account. The system supports automated stores that allow users to enter, take products, and exit without explicitly paying. An illustrative application is an automated, unmanned gas station that allows a user to pay at the pump and then enter a locked on-site convenience store or a locked case with products the user can take for automatic purchase. Embodiments may also extend authorization to other people, such as occupants of the same vehicle.
Type: Grant
Filed: September 21, 2018
Date of Patent: May 7, 2019
Assignee: ACCEL ROBOTICS CORPORATION
Inventors: Marius Buibas, John Quinn, Kaylee Feigum, Csaba Petre, Michael Brandon Maseda, Martin Alan Cseh
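The idea of an authorization that is granted at one point, follows the tracked person in time and space, and can be shared with companions can be modeled with a small state object. The class name, the time-to-live mechanism, and the holder set are hypothetical illustrations, not the patented design.

```python
class Authorization:
    """Toy model of an authorization granted at one point (e.g. paying at
    a pump) that remains valid for a limited time and can be extended to
    other tracked people, such as occupants of the same vehicle."""
    def __init__(self, person_id, granted_at, ttl=900.0):
        self.granted_at = granted_at   # seconds (epoch or session clock)
        self.ttl = ttl                 # validity window in seconds
        self.holders = {person_id}

    def extend_to(self, other_person_id):
        """Share the authorization with a companion."""
        self.holders.add(other_person_id)

    def allows(self, person_id, now):
        """True if this person holds the authorization and it has not
        expired -- e.g. to decide whether to unlock a door or case."""
        return person_id in self.holders and now - self.granted_at <= self.ttl
```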
-
Patent number: 10282852
Abstract: A system that analyzes camera images to track a person in an autonomous store, and to determine when a tracked person takes or moves items in the store. The system may associate a field of influence volume with a person's location; intersection of this volume with an item storage area, such as a shelf, may trigger the system to look for changes in the items on the shelf. Items that are taken from, placed on, or moved on a shelf may be determined by a neural network that processes before and after images of the shelf. Person tracking may be performed by analyzing images from fisheye ceiling cameras projected onto a plane horizontal to the floor. Projected ceiling camera images may be analyzed using a neural network trained to recognize shopper locations. The autonomous store may include modular ceiling and shelving fixtures that contain cameras, lights, processors, and networking.
Type: Grant
Filed: January 23, 2019
Date of Patent: May 7, 2019
Assignee: ACCEL ROBOTICS CORPORATION
Inventors: Marius Buibas, John Quinn, Kaylee Feigum, Csaba Petre, Michael Brandon Maseda, Martin Alan Cseh
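The intersection test between a field-of-influence volume and a shelf can be sketched with a standard sphere-vs-AABB check; a hit would trigger the before/after comparison of shelf images. Modeling the field of influence as a sphere and the shelf as an axis-aligned box is an assumption for illustration.

```python
def sphere_intersects_box(center, radius, box_min, box_max):
    """Does a person's field-of-influence sphere intersect a shelf's
    axis-aligned bounding box? Uses the closest-point-on-box distance test.
    An intersection would trigger analysis of changes on that shelf."""
    d2 = 0.0
    for c, lo, hi in zip(center, box_min, box_max):
        nearest = min(max(c, lo), hi)  # clamp center to the box per axis
        d2 += (c - nearest) ** 2
    return d2 <= radius ** 2
```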
-
Patent number: 10210452
Abstract: Apparatus and methods for a high-level neuromorphic network description (HLND) framework that may be configured to enable users to define neuromorphic network architectures using a unified and unambiguous representation that is both human-readable and machine-interpretable. The framework may be used to define node types and node-to-node connection types, instantiate nodes of different types, and generate instances of connection types between these nodes. To facilitate framework usage, the HLND format may provide the flexibility required by computational neuroscientists and, at the same time, a user-friendly interface for users with limited experience in modeling neurons. The HLND kernel may comprise an interface to the Elementary Network Description (END) that is optimized for efficient representation of neuronal systems in a hardware-independent manner and enables seamless translation of HLND model descriptions into hardware instructions for execution by various processing modules.
Type: Grant
Filed: March 15, 2012
Date of Patent: February 19, 2019
Assignee: QUALCOMM Incorporated
Inventors: Botond Szatmary, Eugene M. Izhikevich, Csaba Petre, Jayram Moorkanikara Nageswaran, Filip Piekniewski
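The workflow the abstract describes (define node and connection types, instantiate nodes, generate connection instances) can be sketched as a small declarative builder. This is a sketch in the spirit of a high-level network description, not the HLND syntax; all class, method, and type names here are invented.

```python
class NetworkDescription:
    """Minimal sketch of a high-level network description: register node
    and connection types, then instantiate populations and projections."""
    def __init__(self):
        self.node_types, self.conn_types = {}, {}
        self.nodes, self.connections = [], []

    def define_node_type(self, name, **params):
        self.node_types[name] = params

    def define_connection_type(self, name, **params):
        self.conn_types[name] = params

    def instantiate(self, type_name, count):
        """Create `count` node instances of a defined type; return their ids."""
        first = len(self.nodes)
        self.nodes.extend({"type": type_name} for _ in range(count))
        return list(range(first, first + count))

    def connect(self, conn_type, pre_ids, post_ids):
        """Generate all-to-all connection instances of a defined type."""
        self.connections.extend(
            (conn_type, i, j) for i in pre_ids for j in post_ids)
```

A lower-level engine (such as an END-style backend) would then translate this hardware-independent description into executable instructions.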
-
Publication number: 20180018775
Abstract: Systems and methods for predictive/reconstructive visual object tracking are disclosed. The visual object tracking has advanced abilities to track objects in scenes, which can have a variety of applications as discussed in this disclosure. In some exemplary implementations, a visual system can comprise a plurality of associative memory units, wherein each associative memory unit has a plurality of layers. The associative memory units can be communicatively coupled to each other in a hierarchical structure, wherein data in associative memory units at higher levels of the hierarchical structure are more abstract than data in lower-level associative memory units. The associative memory units can communicate with one another, supplying contextual data.
Type: Application
Filed: June 19, 2017
Publication date: January 18, 2018
Inventors: Filip Piekniewski, Micah Richert, Dimitry Fisher, Patryk Laurent, Csaba Petre
-
Patent number: 9860077
Abstract: Computerized appliances may be operated by users remotely. A learning controller apparatus may be operated to determine an association between a user indication and an action by the appliance. The user indications, e.g., gestures, posture changes, or audio signals, may trigger an event associated with the controller. The event may be linked to a plurality of instructions configured to communicate a command to the appliance. The learning apparatus may receive sensory input conveying information about the robot's state and environment (context). The sensory input may be used to determine the user indications. During operation, upon determining the indication using sensory input, the controller may cause execution of the respective instructions in order to trigger an action by the appliance. Device animation methodology may enable users to operate computerized appliances using gestures, voice commands, posture changes, and/or other customized control elements.
Type: Grant
Filed: September 17, 2014
Date of Patent: January 2, 2018
Assignee: Brain Corporation
Inventors: Patryk Laurent, Csaba Petre, Eugene M. Izhikevich
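The core learn-then-replay loop (associate a (context, indication) pair with the command it accompanied, then issue that command when the pair recurs) can be sketched as a lookup table. The class and method names, and the string-valued contexts and commands, are hypothetical; the patent's controller infers indications from raw sensory input rather than receiving them as symbols.

```python
class LearningController:
    """Toy learning controller: during training it records which command
    was sent to the appliance for a given (context, indication) pair;
    during operation it replays the learned command when the pair recurs."""
    def __init__(self):
        self.associations = {}

    def observe(self, context, indication, command):
        """Training: associate an observed user indication, in context,
        with the command that was issued to the appliance."""
        self.associations[(context, indication)] = command

    def act(self, context, indication):
        """Operation: return the learned command for this pair, or None
        if no association has been learned yet."""
        return self.associations.get((context, indication))
```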
-
Patent number: 9849588
Abstract: Computerized appliances may be operated by users remotely. A learning controller apparatus may be operated to determine an association between a user indication and an action by the appliance. The user indications, e.g., gestures, posture changes, or audio signals, may trigger an event associated with the controller. The event may be linked to a plurality of instructions configured to communicate a command to the appliance. The learning apparatus may receive sensory input conveying information about the robot's state and environment (context). The sensory input may be used to determine the user indications. During operation, upon determining the indication using sensory input, the controller may cause execution of the respective instructions in order to trigger an action by the appliance. Device animation methodology may enable users to operate computerized appliances using gestures, voice commands, posture changes, and/or other customized control elements.
Type: Grant
Filed: September 17, 2014
Date of Patent: December 26, 2017
Assignee: Brain Corporation
Inventors: Eugene M. Izhikevich, Patryk Laurent, Csaba Petre, Todd Hylton, Vadim Polonichko
-
Patent number: 9821470
Abstract: Computerized appliances may be operated by users remotely. In one exemplary implementation, a learning controller apparatus may be operated to determine an association between a user indication and an action by the appliance. The user indications, e.g., gestures, posture changes, or audio signals, may trigger an event associated with the controller. The event may be linked to a plurality of instructions configured to communicate a command to the appliance. The learning apparatus may receive sensory input conveying information about the robot's state and environment (context). The sensory input may be used to determine the user indications. During operation, upon determining the indication using sensory input, the controller may cause execution of the respective instructions in order to trigger an action by the appliance. Device animation methodology may enable users to operate computerized appliances using gestures, voice commands, posture changes, and/or other customized control elements.
Type: Grant
Filed: September 17, 2014
Date of Patent: November 21, 2017
Assignee: Brain Corporation
Inventors: Patryk Laurent, Csaba Petre, Eugene M. Izhikevich
-
Patent number: 9630317
Abstract: Robotic devices may be operated by users remotely. A learning controller apparatus may detect remote transmissions comprising user control instructions. The learning apparatus may receive sensory input conveying information about the robot's state and environment (context). The learning apparatus may monitor one or more wavelengths (e.g., infrared light, radio channels) and detect transmissions from the user's remote control device to the robot during its operation by the user. The learning apparatus may be configured to develop associations between the detected user remote control instructions and actions of the robot for a given context. When a given sensory context occurs, the learning controller may automatically provide control instructions to the robot that may be associated with the given context. The provision of control instructions to the robot by the learning controller may obviate the need for user remote control of the robot, thereby enabling autonomous operation by the robot.
Type: Grant
Filed: April 3, 2014
Date of Patent: April 25, 2017
Assignee: Brain Corporation
Inventors: Eugene M. Izhikevich, Patryk Laurent, Micah Richert, Csaba Petre
-
Patent number: 9579790
Abstract: Computerized appliances may be operated by users remotely. In one implementation, a learning controller apparatus may be operated to determine an association between a user indication and an action by the appliance. The user indications, e.g., gestures, posture changes, or audio signals, may trigger an event associated with the controller. The event may be linked to a plurality of instructions configured to communicate a command to the appliance. The learning apparatus may receive sensory input conveying information about the robot's state and environment (context). The sensory input may be used to determine the user indications. During operation, upon determining the indication using sensory input, the controller may cause execution of the respective instructions in order to trigger an action by the appliance. Device animation methodology may enable users to operate computerized appliances using gestures, voice commands, posture changes, and/or other customized control elements.
Type: Grant
Filed: September 17, 2014
Date of Patent: February 28, 2017
Assignee: Brain Corporation
Inventors: Patryk Laurent, Csaba Petre, Eugene M. Izhikevich, Vadim Polonichko
-
Patent number: 9311596
Abstract: A simple format is disclosed and referred to as Elementary Network Description (END). The format can fully describe a large-scale neuronal model and embodiments of software or hardware engines to simulate such a model efficiently. The architecture of such neuromorphic engines is optimal for high-performance parallel processing of spiking networks with spike-timing dependent plasticity. Methods for managing memory in a processing system are described whereby memory can be allocated among a plurality of elements and rules configured for each element such that the parallel execution of the spiking networks is optimal.
Type: Grant
Filed: March 5, 2014
Date of Patent: April 12, 2016
Assignee: QUALCOMM TECHNOLOGIES INC.
Inventors: Eugene M. Izhikevich, Botond Szatmary, Csaba Petre, Filip Piekniewski, Michael-David Nakayoshi Canoy, Robert Howard Kimball, Jan Krzys Wegrzyn
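The kind of per-element state memory and update rules an END-style engine would execute can be illustrated with a minimal leaky integrate-and-fire network loop. This is a generic spiking-network sketch, not the END format itself; the LIF model, the leak factor, and the function name are all assumptions, and plasticity is omitted for brevity.

```python
import numpy as np

def simulate_lif(weights, input_current, steps, v_thresh=1.0, leak=0.9):
    """Minimal leaky integrate-and-fire simulation: each unit holds state
    memory (membrane voltage) and applies the same update rule each step,
    the pattern a parallel spiking-network engine would vectorize."""
    n = weights.shape[0]
    v = np.zeros(n)                       # per-unit state memory
    spikes = np.zeros(n)
    spike_counts = np.zeros(n, dtype=int)
    for _ in range(steps):
        # Update rule: leak, external drive, and recurrent synaptic input.
        v = leak * v + input_current + weights @ spikes
        spikes = (v >= v_thresh).astype(float)
        spike_counts += spikes.astype(int)
        v[spikes > 0] = 0.0               # reset units that spiked
    return spike_counts
```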
-
Publication number: 20160075015
Abstract: Computerized appliances may be operated by users remotely. A learning controller apparatus may be operated to determine an association between a user indication and an action by the appliance. The user indications, e.g., gestures, posture changes, or audio signals, may trigger an event associated with the controller. The event may be linked to a plurality of instructions configured to communicate a command to the appliance. The learning apparatus may receive sensory input conveying information about the robot's state and environment (context). The sensory input may be used to determine the user indications. During operation, upon determining the indication using sensory input, the controller may cause execution of the respective instructions in order to trigger an action by the appliance. Device animation methodology may enable users to operate computerized appliances using gestures, voice commands, posture changes, and/or other customized control elements.
Type: Application
Filed: September 17, 2014
Publication date: March 17, 2016
Inventors: Eugene M. Izhikevich, Patryk Laurent, Csaba Petre, Todd Hylton, Vadim Polonichko