Patents by Inventor Ivaylo Boyadzhiev
Ivaylo Boyadzhiev has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 11514674
Abstract: Techniques are described for using computing devices to perform automated operations for determining the acquisition location of an image using an analysis of the image's visual contents. In at least some situations, images to be analyzed include panorama images acquired at acquisition locations in an interior of a multi-room building, and the determined acquisition location information includes a location on a floor plan of the building and in some cases orientation direction information. In at least some such situations, the acquisition location determination is performed without having or using information from any distance-measuring devices about distances from an image's acquisition location to objects in the surrounding building. The acquisition location information may be used in various automated manners, including for controlling navigation of devices (e.g., autonomous vehicles), for display on one or more client devices in corresponding graphical user interfaces, etc.
Type: Grant
Filed: September 4, 2020
Date of Patent: November 29, 2022
Assignee: Zillow, Inc.
Inventors: Pierre Moulon, Naji Khosravan, Yuguang Li, Yujie Li, Ivaylo Boyadzhiev
-
Patent number: 11501492
Abstract: Techniques are described for automated operations to analyze visual data combined from multiple images captured in a room to determine the room shape, such as by iteratively refining alignment of the multiple images' visual data into a common coordinate system until alignment differences satisfy one or more defined criteria, and for subsequently using the determined room shape information in further automated manners. The images may be panorama images in an equirectangular or other spherical format, and determined room shapes for one or more rooms of a building may be fully closed three-dimensional shapes and used to improve navigation of the building (e.g., as part of a generated building floor plan). The automated room shape determination may be further performed without having or using information from any distance-measuring devices about distances from an image's acquisition location to walls or other objects in the surrounding room.
Type: Grant
Filed: July 27, 2021
Date of Patent: November 15, 2022
Assignee: Zillow, Inc.
Inventors: Yuguang Li, Will Adrian Hutchcroft, Ivaylo Boyadzhiev, Christopher Buehler
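The iterative refinement this abstract describes can be illustrated with a minimal 2D sketch: given corresponding wall points observed from two images, repeatedly re-estimate a rotation and translation into a common coordinate system until the mean alignment difference satisfies a tolerance criterion. This is an illustrative stand-in, not the patented method; the function name, the 2D simplification, and the assumption of known point correspondences are all mine.

```python
import math

def refine_alignment(src, dst, tol=1e-6, max_iters=50):
    """Iteratively bring 2D points `src` into the coordinate system of their
    correspondences `dst`, stopping once the mean alignment difference
    (a simple 'defined criterion') falls below `tol`."""
    pts = [tuple(p) for p in src]
    err = float("inf")
    for _ in range(max_iters):
        # Centroids of both point sets.
        cs = (sum(p[0] for p in pts) / len(pts), sum(p[1] for p in pts) / len(pts))
        cd = (sum(p[0] for p in dst) / len(dst), sum(p[1] for p in dst) / len(dst))
        # Closed-form optimal 2D rotation for the centered correspondences.
        sin_sum = sum((s[0] - cs[0]) * (d[1] - cd[1]) - (s[1] - cs[1]) * (d[0] - cd[0])
                      for s, d in zip(pts, dst))
        cos_sum = sum((s[0] - cs[0]) * (d[0] - cd[0]) + (s[1] - cs[1]) * (d[1] - cd[1])
                      for s, d in zip(pts, dst))
        theta = math.atan2(sin_sum, cos_sum)
        c, s_ = math.cos(theta), math.sin(theta)
        # Apply the refined rotation + translation.
        pts = [((p[0] - cs[0]) * c - (p[1] - cs[1]) * s_ + cd[0],
                (p[0] - cs[0]) * s_ + (p[1] - cs[1]) * c + cd[1]) for p in pts]
        err = sum(math.dist(p, d) for p, d in zip(pts, dst)) / len(pts)
        if err < tol:  # alignment-difference criterion satisfied
            break
    return pts, err
```

With exact correspondences the closed-form step converges immediately; the loop structure is what matters here, since real visual data would need repeated re-matching and refinement.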
-
Patent number: 11494973
Abstract: Techniques are described for using computing devices to perform automated operations for analyzing video (or other image sequences) acquired in a defined area, as part of generating mapping information of the defined area for subsequent use (e.g., for controlling navigation of devices, for display on client devices in corresponding GUIs, etc.). The defined area may include an interior of a multi-room building, and the generated information may include a floor map of the building, such as from an analysis of some or all image frames of the video (e.g., 360° image frames from 360° video) using structure-from-motion techniques to identify objects with associated plane and normal orthogonal information, and then clustering detected planes and/or normals from multiple analyzed images to determine likely wall locations. The generating may be further performed without using acquired depth information about distances from the video capture locations to objects in the surrounding building.
Type: Grant
Filed: October 4, 2021
Date of Patent: November 8, 2022
Assignee: Zillow, Inc.
Inventors: Ivaylo Boyadzhiev, Pierre Moulon
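The clustering step this abstract mentions (grouping plane normals detected across many frames to find likely wall directions) can be sketched in a few lines. This is a hypothetical, greatly simplified stand-in, not the patented method: it greedily clusters 1D floor-plane normal angles, folds opposite normals together since both sides describe the same wall, and ignores wrap-around near the fold boundary.

```python
import math

def cluster_wall_normals(normals, angle_tol=math.radians(10)):
    """Greedily cluster plane-normal directions (angles in the floor plane,
    radians) so that planes detected in many frames reinforce the same wall
    direction. Opposite normals describe the same wall, so fold mod pi."""
    clusters = []  # each cluster: [sum_of_angles, count]
    for a in sorted(n % math.pi for n in normals):
        for cl in clusters:
            if abs(a - cl[0] / cl[1]) < angle_tol:  # close to cluster mean
                cl[0] += a
                cl[1] += 1
                break
        else:
            clusters.append([a, 1])  # start a new candidate wall direction
    # Return (mean_direction, support), strongest support first.
    return sorted(((s / c, c) for s, c in clusters), key=lambda t: -t[1])
```

A real pipeline would cluster full 3D plane parameters with robust statistics; the point of the sketch is only that repeated detections of the same plane across frames accumulate into a well-supported wall hypothesis while spurious detections stay weakly supported.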
-
Patent number: 11481925
Abstract: Techniques are described for computing devices to perform automated operations to determine the acquisition locations of images, such as within a building interior based on automatically determined shapes of rooms of the building, and for using the determined image acquisition location information in further automated manners. The image may be a panorama image or of another type (e.g., a rectilinear perspective image) and acquired at an acquisition location in a multi-room building's interior, and the determined acquisition location for such an image may be at least a location on the building's floor plan and optionally an orientation/direction for at least a part of the image. In addition, the automated image acquisition location determination may be further performed without having or using information from any depth sensors or other distance-measuring devices about distances from an image's acquisition location to walls or other objects in the surrounding building.
Type: Grant
Filed: March 15, 2021
Date of Patent: October 25, 2022
Assignee: Zillow, Inc.
Inventors: Yuguang Li, Will Adrian Hutchcroft, Naji Khosravan, Ivaylo Boyadzhiev
-
Publication number: 20220189122
Abstract: Techniques are described for using computing devices to perform automated operations related to providing visual information of multiple types in an integrated manner about a building or other defined area. The techniques may include generating and presenting a GUI (graphical user interface) on a client device that includes a computer model of the building's interior with one or more first types of information (e.g., in a first pane of the GUI), and simultaneously presenting other types of related information about the building interior (e.g., in additional separate GUI pane(s)) that is coordinated with the first type(s) of information being currently displayed. The computer model may be a 3D (three-dimensional) or 2.5D representation generated after the house is built and showing the actual house's interior (e.g., walls, furniture, etc.), and may be displayed to a user of a client computing device in a displayed GUI with various user-selectable controls.
Type: Application
Filed: January 31, 2022
Publication date: June 16, 2022
Inventors: Yuguang Li, Ivaylo Boyadzhiev, Romualdo Impas
-
Publication number: 20220164493
Abstract: Techniques are described for using computing devices to perform automated operations involved in analysis of images acquired in a defined area, as part of generating mapping information of the defined area for subsequent use (e.g., for controlling navigation of devices, for display on client devices in corresponding GUIs, etc.). The defined area may include an interior of a multi-room building, and the generated information may include a floor map of the building, such as from an analysis of multiple 360° spherical panorama images acquired at various viewing locations within the building (e.g., using an image acquisition device with a spherical camera having one or more fisheye lenses to capture a panorama image that extends 360 degrees around a vertical axis). The generating may be further performed without detailed information about distances from the images' viewing locations to objects in the surrounding building.
Type: Application
Filed: February 7, 2022
Publication date: May 26, 2022
Inventors: Yuguang Li, Ivaylo Boyadzhiev, Lambert E. Wixson
-
Publication number: 20220114291
Abstract: Techniques are described for computing devices to perform automated operations related to using images acquired in a building as part of generating a floor plan for the building, in some cases without using depth information from depth-sensing equipment about distances from the images' acquisition locations to objects in the surrounding building, and for subsequent use in further automated manners, such as controlling navigation of mobile devices and/or for display to end users in a corresponding GUI (graphical user interface). In some cases, the MIGM system interacts with an MIGM system operator user, such as by displaying a GUI showing information related to the images and/or a floor plan being generated, and by receiving and using input submitted by the user via the GUI to assist with the generating of the floor plan, such as to specify interconnections between particular rooms via particular inter-room wall openings of the rooms.
Type: Application
Filed: October 13, 2020
Publication date: April 14, 2022
Inventors: Yuguang Li, Pierre Moulon, Lambert E. Wixson, Christopher Buehler, Ivaylo Boyadzhiev
-
Publication number: 20220092227
Abstract: Techniques are described for using computing devices to perform automated operations for identifying building floor plans that have attributes satisfying target criteria and for subsequently using the identified floor plans in further automated manners. In at least some situations, the identification of such building floor plans is based on generating and using adjacency graphs generated for the floor plans that represent inter-connections between rooms and other attributes of the buildings, and in some cases is further based on generating and using embedding vectors that concisely represent the information of the adjacency graphs. Information about such identified building floor plans may be used in various automated manners, including for controlling navigation of devices (e.g., autonomous vehicles), for display on client devices in corresponding graphical user interfaces, for further analysis to identify shared and/or aggregate characteristics, etc.
Type: Application
Filed: September 10, 2021
Publication date: March 24, 2022
Inventors: Yu Yin, Will A. Hutchcroft, Ivaylo Boyadzhiev, Sing Bing Kang, Yujie Li, Pierre Moulon
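The adjacency-graph and embedding-vector idea in this abstract can be sketched with a toy example: represent a floor plan as rooms (nodes) connected by inter-room openings (edges), reduce the graph to a fixed-length vector, and compare floor plans by cosine similarity of their vectors. The patent likely covers learned graph embeddings; the hand-crafted embedding below (room-type counts plus a capped node-degree histogram) and all names in it are illustrative assumptions.

```python
import math
from collections import Counter

ROOM_TYPES = ["bedroom", "bathroom", "kitchen", "living"]  # illustrative labels

def embed_floor_plan(rooms, edges):
    """rooms: {room_id: room_type}; edges: pairs of room_ids connected by an
    inter-room opening. Returns a fixed-length embedding vector: counts of
    each room type, then a histogram of node degrees capped at 3."""
    type_counts = Counter(rooms.values())
    degree = Counter()
    for a, b in edges:
        degree[a] += 1
        degree[b] += 1
    deg_hist = Counter(min(degree[r], 3) for r in rooms)
    return ([type_counts[t] for t in ROOM_TYPES] +
            [deg_hist[d] for d in range(4)])

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.hypot(*u) * math.hypot(*v))
```

With this in place, "find floor plans like this one" becomes a nearest-neighbor query over embedding vectors rather than expensive graph matching, which is the practical appeal of the embedding step.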
-
Publication number: 20220076019
Abstract: Techniques are described for using computing devices to perform automated operations for determining the acquisition location of an image using an analysis of the image's visual contents. In at least some situations, images to be analyzed include panorama images acquired at acquisition locations in an interior of a multi-room building, and the determined acquisition location information includes a location on a floor plan of the building and in some cases orientation direction information. In at least some such situations, the acquisition location determination is performed without having or using information from any distance-measuring devices about distances from an image's acquisition location to objects in the surrounding building. The acquisition location information may be used in various automated manners, including for controlling navigation of devices (e.g., autonomous vehicles), for display on one or more client devices in corresponding graphical user interfaces, etc.
Type: Application
Filed: September 4, 2020
Publication date: March 10, 2022
Inventors: Pierre Moulon, Naji Khosravan, Yuguang Li, Yujie Li, Ivaylo Boyadzhiev
-
Patent number: 11243656
Abstract: Techniques are described for using computing devices to perform automated operations involved in analysis of images acquired in a defined area, as part of generating mapping information of the defined area for subsequent use (e.g., for controlling navigation of devices, for display on client devices in corresponding GUIs, etc.). The defined area may include an interior of a multi-room building, and the generated information may include a floor map of the building, such as from an analysis of multiple 360° spherical panorama images acquired at various viewing locations within the building (e.g., using an image acquisition device with a spherical camera having one or more fisheye lenses to capture a panorama image that extends 360 degrees around a vertical axis). The generating may be further performed without detailed information about distances from the images' viewing locations to objects in the surrounding building.
Type: Grant
Filed: March 2, 2020
Date of Patent: February 8, 2022
Assignee: Zillow, Inc.
Inventors: Yuguang Li, Grant Gingell, Ivaylo Boyadzhiev
-
Patent number: 11238652
Abstract: Techniques are described for using computing devices to perform automated operations related to providing visual information of multiple types in an integrated manner about a building or other defined area. The techniques may include generating and presenting a GUI (graphical user interface) on a client device that includes a computer model of the building's interior with one or more first types of information (e.g., in a first pane of the GUI), and simultaneously presenting other types of related information about the building interior (e.g., in additional separate GUI pane(s)) that is coordinated with the first type(s) of information being currently displayed. The computer model may be a 3D (three-dimensional) or 2.5D representation generated after the house is built and showing the actual house's interior (e.g., walls, furniture, etc.), and may be displayed to a user of a client computing device in a displayed GUI with various user-selectable controls.
Type: Grant
Filed: October 7, 2020
Date of Patent: February 1, 2022
Assignee: Zillow, Inc.
Inventors: Romualdo Impas, Ivaylo Boyadzhiev, Joshuah Vincent, Yuguang Li, Pierre Moulon
-
Publication number: 20220028156
Abstract: Techniques are described for using computing devices to perform automated operations for analyzing video (or other image sequences) acquired in a defined area, as part of generating mapping information of the defined area for subsequent use (e.g., for controlling navigation of devices, for display on client devices in corresponding GUIs, etc.). The defined area may include an interior of a multi-room building, and the generated information may include a floor map of the building, such as from an analysis of some or all image frames of the video (e.g., 360° image frames from 360° video) using structure-from-motion techniques to identify objects with associated plane and normal orthogonal information, and then clustering detected planes and/or normals from multiple analyzed images to determine likely wall locations. The generating may be further performed without using acquired depth information about distances from the video capture locations to objects in the surrounding building.
Type: Application
Filed: October 4, 2021
Publication date: January 27, 2022
Inventors: Ivaylo Boyadzhiev, Pierre Moulon
-
Publication number: 20220028159
Abstract: Techniques are described for using computing devices to perform automated operations related to, with respect to a computer model of a house or other building's interior, generating and displaying simulated lighting information in the model based on sunlight or other external light that is estimated to enter the building and be visible in particular rooms of the interior under specified conditions, such as using ambient occlusion and light transport matrix calculations. The computer model may be a 3D (three-dimensional) or 2.5D representation that is generated after the house is built and that shows physical components of the actual house's interior (e.g., walls), and may be displayed to a user of a client computing device in a displayed GUI (graphical user interface) via which the user specifies conditions for which the simulated lighting display is generated.
Type: Application
Filed: October 5, 2021
Publication date: January 27, 2022
Inventors: Joshuah Vincent, Pierre Moulon, Ivaylo Boyadzhiev, Joshua David Maruska
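The light transport matrix calculation this abstract references has a compact general form: if entry T[i][j] gives the contribution of external-light basis element j (say, a patch of sky) to the brightness of interior surface point i, then relighting the model under new conditions is a single matrix-vector product, since occlusion and visibility effects are folded into T once during precomputation. A minimal sketch under those assumptions (the function and data names are mine, not the patent's):

```python
def relight(transport, env_light):
    """transport[i][j]: precomputed contribution of external light basis
    direction j to the brightness of interior surface point i (visibility
    and ambient-occlusion effects are baked in). env_light[j]: intensity of
    basis direction j under the user-specified conditions. Returns the
    simulated brightness of each surface point: L = T * e."""
    return [sum(t_ij * e_j for t_ij, e_j in zip(row, env_light))
            for row in transport]
```

This is why the user can interactively change time-of-day conditions in the GUI: only the cheap product is recomputed, never the expensive occlusion analysis.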
-
Publication number: 20210377442
Abstract: Techniques are described for automated operations involving capturing and analyzing information from an interior of a house, building or other structure, for use in generating and providing a representation of that interior. Such techniques may include using a user's mobile device to capture visual data from multiple viewing locations (e.g., video captured while the mobile device is rotated for some or all of a full 360 degree rotation at each viewing location) within multiple rooms, capturing data linking the multiple viewing locations, analyzing each viewing location's visual data to create a panorama image from that viewing location, analyzing the linking data to determine relative positions/directions between at least some viewing locations, creating inter-panorama links in the panoramas to each of one or more other panoramas based on such determined positions/directions, and providing information to display multiple linked panorama images to represent the interior.
Type: Application
Filed: July 5, 2021
Publication date: December 2, 2021
Inventors: Ivaylo Boyadzhiev, Alex Colburn, Li Guan, Qi Shan
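The linking step this abstract describes (turning relative positions/directions between viewing locations into inter-panorama links) can be sketched simply: chain the relative movements between viewing locations to place every panorama in one shared frame, then compute the bearing from any panorama to any other. This is an idealized stand-in that assumes noise-free relative estimates and a single connected chain; the function names and data layout are hypothetical.

```python
import math

def place_viewing_locations(links):
    """links: (from_id, to_id, heading_radians, distance) tuples estimated
    from the data captured while moving between viewing locations. Chains
    the relative movements into positions in one shared frame, anchoring
    the first panorama at the origin."""
    positions = {}
    for a, b, heading, dist in links:
        if a not in positions:
            positions[a] = (0.0, 0.0)
        ax, ay = positions[a]
        positions[b] = (ax + dist * math.cos(heading),
                        ay + dist * math.sin(heading))
    return positions

def inter_panorama_link(positions, a, b):
    """Bearing (radians) to render inside panorama `a` pointing toward `b`."""
    (ax, ay), (bx, by) = positions[a], positions[b]
    return math.atan2(by - ay, bx - ax)
```

Note that links become available even between viewing locations that were never directly connected during capture, because every panorama ends up in the same coordinate frame.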
-
Patent number: 11165959
Abstract: Techniques are described for automated operations involving acquiring and analyzing information from an interior of a house, building or other structure, for use in generating and providing a representation of that interior. Such techniques may include using a user's mobile device to capture video data from multiple viewing locations (e.g., 360° video at each viewing location) within multiple rooms, and capturing data linking the multiple viewing locations (e.g.
Type: Grant
Filed: October 7, 2020
Date of Patent: November 2, 2021
Assignee: Zillow, Inc.
Inventors: Qi Shan, Alex Colburn, Li Guan, Ivaylo Boyadzhiev
-
Patent number: 11164361
Abstract: Techniques are described for using computing devices to perform automated operations for analyzing video (or other image sequences) acquired in a defined area, as part of generating mapping information of the defined area for subsequent use (e.g., for controlling navigation of devices, for display on client devices in corresponding GUIs, etc.). The defined area may include an interior of a multi-room building, and the generated information may include a floor map of the building, such as from an analysis of some or all image frames of the video (e.g., 360° image frames from 360° video) using structure-from-motion techniques to identify objects with associated plane and normal orthogonal information, and then clustering detected planes and/or normals from multiple analyzed images to determine likely wall locations. The generating may be further performed without using acquired depth information about distances from the video capture locations to objects in the surrounding building.
Type: Grant
Filed: October 26, 2020
Date of Patent: November 2, 2021
Assignee: Zillow, Inc.
Inventors: Pierre Moulon, Ivaylo Boyadzhiev
-
Patent number: 11164368
Abstract: Techniques are described for using computing devices to perform automated operations related to, with respect to a computer model of a house or other building's interior, generating and displaying simulated lighting information in the model based on sunlight or other external light that is estimated to enter the building and be visible in particular rooms of the interior under specified conditions, such as using ambient occlusion and light transport matrix calculations. The computer model may be a 3D (three-dimensional) or 2.5D representation that is generated after the house is built and that shows physical components of the actual house's interior (e.g., walls), and may be displayed to a user of a client computing device in a displayed GUI (graphical user interface) via which the user specifies conditions for which the simulated lighting display is generated.
Type: Grant
Filed: April 6, 2020
Date of Patent: November 2, 2021
Assignee: Zillow, Inc.
Inventors: Joshuah Vincent, Pierre Moulon, Ivaylo Boyadzhiev, Joshua David Maruska
-
Patent number: 11057561
Abstract: Techniques are described for automated operations involving capturing and analyzing information from an interior of a house, building or other structure, for use in generating and providing a representation of that interior. Such techniques may include using a user's mobile device to capture visual data from multiple viewing locations (e.g., video captured while the mobile device is rotated for some or all of a full 360 degree rotation at each viewing location) within multiple rooms, capturing data linking the multiple viewing locations, analyzing each viewing location's visual data to create a panorama image from that viewing location, analyzing the linking data to determine relative positions/directions between at least some viewing locations, creating inter-panorama links in the panoramas to each of one or more other panoramas based on such determined positions/directions, and providing information to display multiple linked panorama images to represent the interior.
Type: Grant
Filed: June 18, 2019
Date of Patent: July 6, 2021
Assignee: Zillow, Inc.
Inventors: Ivaylo Boyadzhiev, Alex Colburn, Li Guan, Qi Shan
-
Publication number: 20210142564
Abstract: Techniques are described for using computing devices to perform automated operations related to providing visual information of multiple types in an integrated manner about a building or other defined area. The techniques may include generating and presenting a GUI (graphical user interface) on a client device that includes a computer model of the building's interior with one or more first types of information (e.g., in a first pane of the GUI), and simultaneously presenting other types of related information about the building interior (e.g., in additional separate GUI pane(s)) that is coordinated with the first type(s) of information being currently displayed. The computer model may be a 3D (three-dimensional) or 2.5D representation generated after the house is built and showing the actual house's interior (e.g., walls, furniture, etc.), and may be displayed to a user of a client computing device in a displayed GUI with various user-selectable controls.
Type: Application
Filed: October 7, 2020
Publication date: May 13, 2021
Inventors: Romualdo Impas, Ivaylo Boyadzhiev, Joshuah Vincent, Yuguang Li, Pierre Moulon
-
Publication number: 20210125397
Abstract: Techniques are described for using computing devices to perform automated operations for analyzing video (or other image sequences) acquired in a defined area, as part of generating mapping information of the defined area for subsequent use (e.g., for controlling navigation of devices, for display on client devices in corresponding GUIs, etc.). The defined area may include an interior of a multi-room building, and the generated information may include a floor map of the building, such as from an analysis of some or all image frames of the video (e.g., 360° image frames from 360° video) using structure-from-motion techniques to identify objects with associated plane and normal orthogonal information, and then clustering detected planes and/or normals from multiple analyzed images to determine likely wall locations. The generating may be further performed without using acquired depth information about distances from the video capture locations to objects in the surrounding building.
Type: Application
Filed: October 26, 2020
Publication date: April 29, 2021
Inventors: Pierre Moulon, Ivaylo Boyadzhiev