PROPERTY INSPECTION SYSTEM AND METHOD
A computer system and method for performing property inspections. Digital media is received in a computer vision image analysis system from one or more user devices via a network, and a determination is made regarding an environment type associated with the received digital media. One or more objects that are located in the determined environment and present in the received digital media are identified. A determination is made, based upon a set of rules, regarding an absence of objects in the received digital media contingent upon the determined environment type. Certain implementations of the disclosed technology can include systems and methods to train and prompt a user to narrate loss details while capturing an audiovisual record of damage or loss. Certain implementations may further utilize artificial intelligence analysis of captured audiovisual data.
This application is a Continuation-in-Part application claiming priority under 35 U.S.C. § 120 to U.S. patent application Ser. No. 18/509,423, filed 15 Nov. 2023, and published as U.S. Patent Application Publication US20240087061 on 14 Mar. 2024, which claims priority to U.S. patent application Ser. No. 18/045,861, filed 12 Oct. 2022, and issued as U.S. Pat. No. 11,854,100 on 26 Dec. 2023, which claims priority to U.S. patent application Ser. No. 16/276,405, filed 14 Feb. 2019 and issued as U.S. Pat. No. 11,494,857 on 8 Nov. 2022, which claims priority to U.S. Provisional Patent Application Ser. No. 62/631,266, filed 15 Feb. 2018, the contents of which are incorporated herein by reference in their entirety as if presented in full.
FIELD OF THE INVENTION
The disclosed embodiments generally relate to systems and methods for capturing first notice of loss (FNOL) reports and, more particularly, to systems and methods that train and prompt a user to narrate loss details while capturing an audiovisual record of damage or loss. Certain implementations of the disclosed technology can include generating conversational flows to assist the user and/or utilizing artificial intelligence analysis of captured audiovisual data.
BACKGROUND OF THE INVENTION
When an insurance provider offers insurance for a home or business, the provider takes on the risk that any damage or liability associated with that property can be offset by the premium payments made by property owners. In order to strike a good balance between offering competitive prices and managing risk, an insurance provider may wish to assess the relative risk of each potential insurable property and/or receive and assess documentation of actual claim loss.
Property inspection and/or loss reports historically required a trained professional to physically travel to a property to conduct a comprehensive property assessment while documenting important details in a report or a series of reports. This process has proven inefficient, requiring the training of professionals as well as travel time and expenses for transportation and inspection labor. In some scenarios, property inspections are not performed at all (e.g., sight unseen), and thus insurance providers and other parties expose themselves to an unnecessary level of risk.
With the proliferation of mobile computing devices, such as tablets and smartphones, end customers can now capture and document property damage themselves instead of relying on professional inspectors. However, it has been demonstrated that the ideal documenting process of simultaneously narrating while capturing video to report a loss can be odd and non-intuitive for users, particularly when they are using their rear-facing camera. In test groups, users would not narrate a video if they could not see themselves on screen, even when clear instructions to do so were provided.
A need exists for ways to train and prompt a user to narrate loss details while capturing an audiovisual record of damage or loss so that the appropriate information is captured.
SUMMARY OF THE INVENTION
The purpose and advantages of the illustrated embodiments described below will be set forth in and apparent from the description that follows. Additional advantages of the illustrated embodiments will be realized and attained by the devices, systems and methods particularly pointed out in the written description and claims hereof, as well as from the appended drawings.
In accordance with certain implementations of the disclosed technology, computer-implemented methods, systems, and non-transitory computer-readable media are provided for initiating and capturing audiovisual documentation of an environment. The method, system, and/or computer-readable media are configured for receiving, at a mobile computing device, an input command to initiate capturing audiovisual documentation of an environment; and outputting, from the mobile computing device: instructions for a user to utter one or more test phrases; and a user training progress indicator configured to advance responsive to an audible detection of the one or more test phrases. While in a training phase, the method, system, and/or computer-readable media are configured for receiving audible input corresponding to the instructions; advancing the user training progress indicator responsive to the received audible input; and, responsive to receiving a predetermined threshold amount of audible input, switching to an audiovisual capturing phase. While in the audiovisual capturing phase, the method, system, and/or computer-readable media are configured for capturing, by the mobile computing device, audiovisual documentation comprising video and user-narrated audio of the environment.
An illustrative embodiment involves an insurance provider receiving information about an insured property, preferably from a user of a smart device located in an area of the property, where the information is indicative of risk associated with the property. Based on the received information, the insurance provider determines a risk-adjusted insurance premium for the property to adjust for the indicated risk. In particular, the illustrated embodiment provides an Artificial Intelligence (AI) assistant for the underwriting process. The AI assistant guides users, preferably via a conversational flow process, through an underwriting and inspection process such that any user of a smart device having a camera can capture property information to be utilized in an underwriting process without resort to costly trained professionals. Insurance providers and other parties that benefit from such property data are thus enabled to gather important property information faster and more affordably than previously accomplished with trained professionals for determining property value and risk exposure.
The accompanying appendices and/or drawings illustrate various non-limiting, example, inventive aspects in accordance with the present disclosure:
The illustrated embodiments are now described more fully with reference to the accompanying drawings, wherein like reference numerals identify similar structural/functional features. The illustrated embodiments are not limited in any way to what is illustrated, as the embodiments described below are merely exemplary and can be embodied in various forms, as appreciated by one skilled in the art. Therefore, it is to be understood that any structural and functional details disclosed herein are not to be interpreted as limiting, but merely as a basis for the claims and as a representation for teaching one skilled in the art to variously employ the discussed embodiments. Furthermore, the terms and phrases used herein are not intended to be limiting but rather to provide an understandable description of the illustrated embodiments.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs. Although any methods and materials similar or equivalent to those described herein can also be used in the practice or testing of the illustrated embodiments, exemplary methods and materials are now described.
It must be noted that as used herein and in the appended claims, the singular forms “a”, “an,” and “the” include plural referents unless the context clearly dictates otherwise. Thus, for example, reference to “a stimulus” includes a plurality of such stimuli and reference to “the signal” includes reference to one or more signals and equivalents thereof known to those skilled in the art, and so forth.
It is to be appreciated the illustrated embodiments discussed below are preferably a software algorithm, program or code residing on a computer-usable medium having control logic for enabling execution on a machine having a computer processor. The machine typically includes memory storage configured to provide output from execution of the computer algorithm or program.
As used herein, the term “software” is meant to be synonymous with any code or program that can execute in a processor of a host computer, regardless of whether the implementation is in hardware, firmware or as a software computer product available on a disc, a memory storage device, or for download from a remote machine. The embodiments described herein include such software to implement the equations, relationships and algorithms described herein. One skilled in the art will appreciate further features and advantages of the illustrated embodiments based on the above-described embodiments. Accordingly, the illustrated embodiments are not to be limited by what has been particularly shown and described, except as indicated by the appended claims.
Certain implementations of the disclosed technology may be utilized to prompt users to properly record damage or loss while narrating the scene using their mobile computing device's rear-facing camera. Experiments with user groups indicate that users are typically not accustomed to narrating during video capture if they cannot see themselves on-screen, even when provided clear instructions or prompts such as a live microphone volume display. One working theory consistent with the experiments is that users feel they are recording “what is on screen,” and if they can't see themselves in video being captured, they may believe that their voice can't be recorded and hence there is no need to speak.
Certain implementations of the disclosed technology provide a system and method to prompt a user to speak while capturing video by inserting a user training progress indicator in the video. In certain implementations, the user training progress indicator may be configured to advance (such as filling a bar, circle, or other indicator) responsive to an audible detection of one or more phrases. For example, while in a training phase, the mobile computing device may receive audible input corresponding to instructions to utter one or more test phrases. The user training progress indicator may advance responsive to the received audible input. Responsive to receiving a predetermined threshold amount of audible input, the mobile computing device may switch to an audiovisual capturing phase for documenting the loss. Experiments show that the response from the users is extremely consistent. For example, users first appeared confused, but after reading the instructions, they uttered test phrases which in turn advanced the progress indicator. Once the users saw the connection between the progress indicator and their speech on screen, they quickly continued speaking to advance the progress indicator, and they continued speaking during the audiovisual capturing phase for documenting the loss. After adding the step of the user training progress indicator to the recording process, 100% of users in the test group correctly narrated the details of the loss while recording a video of the loss with the rear camera of their mobile computing device.
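By way of example, and not limitation, the following minimal sketch illustrates the training-phase flow described above, in which the progress indicator advances with detected speech until a threshold amount of audible input triggers the capturing phase. All names (mic, ui, camera, CAPTURE_THRESHOLD, etc.) are hypothetical placeholders, not an actual implementation:

```python
# Minimal sketch of the training phase described above; mic, ui, and
# camera are hypothetical stand-ins for platform audio/UI/camera APIs.

import time

CAPTURE_THRESHOLD = 3.0  # seconds of detected speech before switching phases
POLL_INTERVAL = 0.1      # seconds between microphone level checks
SPEECH_LEVEL = 0.2       # normalized level treated as audible speech


def run_training_phase(mic, ui):
    """Advance the user training progress indicator while the user utters
    test phrases; return once a threshold amount of speech is detected."""
    ui.show_instructions("Read this phrase aloud: 'I am recording my loss.'")
    accumulated = 0.0
    while accumulated < CAPTURE_THRESHOLD:
        if mic.current_level() > SPEECH_LEVEL:  # audible input detected
            accumulated += POLL_INTERVAL
            ui.set_progress(accumulated / CAPTURE_THRESHOLD)  # fill the bar
        time.sleep(POLL_INTERVAL)


def document_loss(camera, mic, ui):
    run_training_phase(mic, ui)          # training phase
    camera.start_recording(audio=True)   # audiovisual capturing phase
```

In this sketch, the indicator advances only while speech is detected, which is what teaches the user the on-screen connection between speaking and progress before recording begins.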
Turning now descriptively to the drawings, in which similar reference characters denote similar elements throughout the several views:
It is to be understood a communication network 100 is a geographically distributed collection of nodes interconnected by communication links and segments for transporting data between end nodes, such as personal computers, workstations, smartphone devices, tablets, televisions, sensors, and/or other devices such as automobiles, etc. Many types of networks are available, with the types ranging from local area networks (LANs) to wide area networks (WANs). LANs typically connect the nodes over dedicated private communications links located in the same general physical location, such as a building or campus. WANs, on the other hand, typically connect geographically dispersed nodes over long-distance communications links, such as common carrier telephone lines, optical lightpaths, synchronous optical networks (SONET), synchronous digital hierarchy (SDH) links, or Powerline Communications (PLC), and others.
As will be appreciated by one skilled in the art, aspects of the present invention may be embodied as a system, method or computer program product. Accordingly, aspects of the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment (including firmware, resident software, micro-code, etc.) or an embodiment combining software and hardware aspects that may all generally be referred to herein as a “circuit,” “module” or “system.” Furthermore, aspects of the present invention may take the form of a computer program product embodied in one or more computer readable medium(s) having computer readable program code embodied thereon.
Any combination of one or more computer readable medium(s) may be utilized. The computer readable medium may be a computer readable signal medium or a computer readable storage medium. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium would include the following: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or Flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this document, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
A computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
Computer program code for carrying out operations for aspects of the present invention may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, Smalltalk, C++ or the like and conventional procedural programming languages, such as the “C” programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server. In the latter scenario, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider).
Aspects of the present invention are described below with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the invention. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer readable medium that can direct a computer, other programmable data processing apparatus, or other devices to function in a particular manner, such that the instructions stored in the computer readable medium produce an article of manufacture including instructions which implement the function/act specified in the flowchart and/or block diagram block or blocks.
The computer program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus or other devices to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide processes for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks.
Device 200 is intended to represent any type of computer system capable of carrying out the teachings of various embodiments of the present invention. Device 200 is only one example of a suitable system and is not intended to suggest any limitation as to the scope of use or functionality of embodiments of the invention described herein. Regardless, computing device 200 is capable of being implemented and/or performing any of the functionality set forth herein.
Computing device 200 is operational with numerous other general purpose or special purpose computing system environments or configurations. Examples of well-known computing systems, environments, and/or configurations that may be suitable for use with computing device 200 include, but are not limited to, personal computer systems, server computer systems, thin clients, thick clients, hand-held or laptop devices, multiprocessor systems, microprocessor-based systems, set top boxes, programmable consumer electronics, network PCs, minicomputer systems, and distributed data processing environments that include any of the above systems or devices, and the like.
Computing device 200 may be described in the general context of computer system-executable instructions, such as program modules, being executed by a computer system. Generally, program modules may include routines, programs, objects, components, logic, data structures, and so on that perform particular tasks or implement particular abstract data types. Computing device 200 may be practiced in distributed data processing environments where tasks are performed by remote processing devices that are linked through a communications network. In a distributed data processing environment, program modules may be located in both local and remote computer system storage media including memory storage devices.
Device 200 is shown in the form of a general-purpose computing device.
The components of device 200 may include, but are not limited to, one or more processors or processing units 216, a system memory 228, and a bus 218 that couples various system components including system memory 228 to processor 216.
Bus 218 represents one or more of any of several types of bus structures, including a memory bus or memory controller, a peripheral bus, an accelerated graphics port, and a processor or local bus using any of a variety of bus architectures. By way of example, and not limitation, such architectures include Industry Standard Architecture (ISA) bus, Micro Channel Architecture (MCA) bus, Enhanced ISA (EISA) bus, Video Electronics Standards Association (VESA) local bus, and Peripheral Component Interconnect (PCI) bus. Computing device 200 typically includes a variety of computer system readable media. Such media may be any available media that is accessible by device 200, and it includes both volatile and non-volatile media, removable and non-removable media.
System memory 228 can include computer system readable media in the form of volatile memory, such as random access memory (RAM) 230 and/or cache memory 232. Computing device 200 may further include other removable/non-removable, volatile/non-volatile computer system storage media. By way of example only, storage system 234 can be provided for reading from and writing to a non-removable, non-volatile magnetic media (not shown and typically called a “hard drive”). Although not shown, a magnetic disk drive for reading from and writing to a removable, non-volatile magnetic disk (e.g., a “floppy disk”), and an optical disk drive for reading from or writing to a removable, non-volatile optical disk such as a CD-ROM, DVD-ROM or other optical media can be provided. In such instances, each can be connected to bus 218 by one or more data media interfaces. As will be further depicted and described below, memory 228 may include at least one program product having a set (e.g., at least one) of program modules that are configured to carry out the functions of embodiments of the invention.
Program/utility 240, having a set (at least one) of program modules 215, such as an underwriting module, may be stored in memory 228 by way of example, and not limitation, as well as an operating system, one or more application programs, other program modules, and program data. Each of the operating system, one or more application programs, other program modules, and program data or some combination thereof, may include an implementation of a networking environment. Program modules 215 generally carry out the functions and/or methodologies of embodiments of the invention as described herein.
Device 200 may also communicate with one or more external devices 214 such as a keyboard, a pointing device, a display 224, etc.; one or more devices that enable a user to interact with computing device 200; and/or any devices (e.g., network card, modem, etc.) that enable computing device 200 to communicate with one or more other computing devices. Such communication can occur via Input/Output (I/O) interfaces 222. Still yet, device 200 can communicate with one or more networks such as a local area network (LAN), a general wide area network (WAN), and/or a public network (e.g., the Internet) via network adapter 220. As depicted, network adapter 220 communicates with the other components of computing device 200 via bus 218. It should be understood that although not shown, other hardware and/or software components could be used in conjunction with device 200. Examples include, but are not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, and data archival storage systems.
With the exemplary communication network 100 and computing device 200 described above, an illustrated embodiment of system 300 is now described.
Discussed below is a technological improvement to existing computer systems, particularly (but not limited to) those used in an underwriting process, in which data extracted from a user device 105 is analyzed and processed in the Computer Vision API 310 and the Conversational AI/Bot Service System 320 so as to preferably determine and generate a report that is enriched with internal and external datasets in order to add pricing/value data and estimates of risk exposure.
It is to be appreciated that system 300 is described herein for illustrative use with insurance underwriting tasks; however, system 300 is not to be understood to be limited to use with insurance underwriting, as it may be used with any applicable applications/usage environments. For instance, one such other use includes the moving industry, wherein system 300 is configured to identify and catalogue the contents of a premises (e.g., a home, office, etc.), which then can be used to provide a detailed report of the contents to be moved as well as their current value (as to be appreciated below).
With reference now to an illustrated embodiment, system 300 preferably includes a user device 105, a Computer Vision AI API 310, a Conversational AI/Bot Service System 320, and a records database 330.
As mentioned above, the user device 105 is to be understood to encompass a portable computer device preferably having network connection components (e.g., a cellular network transceiver), a display (e.g., a touchscreen display), and a camera configured to capture both photographs and video. In accordance with the preferred illustrated embodiment of system 300, the user device 105 is to be understood to be either a smartphone device or a tablet device.
The Computer Vision AI API 310 is preferably configured to interact with a user device 105 so as to receive captured media (e.g., photographs and/or video) from the user device 105 and to perform analysis thereon. In the preferred illustrated embodiment, the AI API 310 is configured to perform insurance inspection recognition tasks on received media (as described herein), but it is not to be understood to be limited to only performing insurance inspection recognition tasks. In the preferred embodiment, the AI API 310 is configured, preferably using AI, to detect an environment the received media is associated with (e.g., a kitchen, living room, bedroom, garage, outside structure, roof, etc.) and, more specifically, objects located in that environment (e.g., a stove, refrigerator, fireplace, lighting components, drapery, outdoor structure material, the location of the structure relative to nearby environmental elements (e.g., standing or still water, shrubbery, landscape grade), and recreational objects (e.g., swimming pools, trampolines, and the like)). The AI API 310 is further configured, preferably using AI, to determine the absence of objects in a particular environment. For instance, if the environment is a kitchen, the AI API 310 may determine the absence of a fire extinguisher; if the environment is a bedroom, the AI API 310 may determine the absence of fire/smoke/CO2 detectors; or if the environment is a swimming pool, the AI API 310 may determine the absence of a fence and/or certain safety equipment (e.g., life vests).
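By way of example, and not limitation, such rule-based absence detection might be sketched as follows; the environment labels, expected-object table, and detected-object sets are assumptions for illustration only, not the actual rule set:

```python
# Illustrative sketch of rule-based absence detection. The expected-object
# table below is a hypothetical example, not the patent's actual rules.

EXPECTED_OBJECTS = {
    "kitchen": {"fire extinguisher", "smoke detector"},
    "bedroom": {"smoke detector", "co2 detector"},
    "swimming pool": {"fence", "life vest"},
}

def find_missing_objects(environment: str, detected: set[str]) -> set[str]:
    """Return objects expected in the given environment but absent from
    the set of detected objects, per the rules table above."""
    return EXPECTED_OBJECTS.get(environment, set()) - detected

# Example: a kitchen where only a stove and refrigerator were detected
missing = find_missing_objects("kitchen", {"stove", "refrigerator"})
print(missing)  # {'fire extinguisher', 'smoke detector'} (order may vary)
```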
The AI API 310 is further configured to interact with the Conversational AI/Bot Service System 320 so as to indicate what has, and what has not, been detected by the AI API 310 upon analysis of the media received from user device 105. This information/data enables the Conversational AI/Bot Service System 320, preferably using a set of preconfigured rules, to determine a conversational flow to be presented to the user device 105 regarding requested follow-up information. For instance, if the detected environment of the received media is a kitchen and the AI API 310 is unable to detect from the received media the make/model of certain detected kitchen appliances (e.g., a stove and a refrigerator), and also determines the absence of certain objects (e.g., a fire extinguisher, smoke/heat/CO2 detectors), the AI API 310 then indicates the need for this additional information to the Conversational AI/Bot Service System 320. For illustrative purposes, another example of the capabilities of the AI API 310: if the detected environment (e.g., a living room, bedroom, etc.) includes a fireplace, the absence of protective fireplace doors/screens may be determined, in addition to detection of objects possibly having flammable characteristics located in close proximity to the fireplace (e.g., furniture, drapery, etc.). In this scenario, the Conversational AI/Bot Service System 320, using its set of rules, would then format a conversational flow to the user device 105 requesting what, if any, type of fire protection is provided on the fireplace and what type of material is used in the objects detected in close proximity to the fireplace.
It is thus to be appreciated that the Conversational AI/Bot Service System 320 is configured, preferably using a set of rules, to utilize the aforesaid information provided by the AI API 310 to format a conversational flow for the user device 105. It is to be understood this conversational flow may encompass chat formats (including conversation bubbles), SMS, MMS, email, messaging, and audible and/or video communication types with the user device 105, examples of which are provided below. It is to be further understood the AI API 310 is also configured to instruct the Conversational AI/Bot Service System 320 to determine and change the user interaction experience/conversation on the user device 105 so as to adapt based upon what is being seen by the camera and data provided by the user device 105.
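By way of example, and not limitation, one way such a rule set might map missing information to a conversational flow is sketched below; the prompt templates, function name, and chat-bubble message format are hypothetical illustrations:

```python
# Hedged sketch of mapping missing items to a conversational flow; the
# prompt templates and message format are assumptions for illustration.

FOLLOW_UP_PROMPTS = {
    "fire extinguisher": "We couldn't spot a fire extinguisher. Is there one in this room?",
    "appliance make/model": "Could you capture a close-up of the appliance's make/model label?",
    "fireplace screen": "Does the fireplace have protective doors or a screen?",
}

def format_conversational_flow(missing_items):
    """Format a conversational flow (e.g., chat bubbles) requesting the
    follow-up information indicated by the AI API."""
    return [
        {"type": "chat_bubble", "text": FOLLOW_UP_PROMPTS[item]}
        for item in missing_items
        if item in FOLLOW_UP_PROMPTS
    ]

flow = format_conversational_flow(["fire extinguisher", "appliance make/model"])
```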
The records database 330 preferably receives and stores the information determined from the AI API 310 and the Conversational AI/Bot Service System 320. With regard to the insurance industry, this stored information can be used for underwriting purposes (e.g., determining risk and premiums, premium renewals, claims determinations and adjustments, and other tasks associated with insurance underwriting). The records database 330 may further be configured to generate a report of the premises to be insured.
With certain components of an illustrated embodiment described above, an exemplary process for performing an underwriting task using system 300 is now described.
Once the captured media is transmitted (step 440) by the user device 105 and is received by the AI API 310 in system 300, it is preferably parsed by the AI API 310 using artificial intelligence techniques, step 450, to determine objects (including object materials and condition), and the absence of objects, in a subject environment (e.g., a kitchen), as described above. As also described above, the AI API 310 provides this information to the Conversational AI/Bot Service System 320, step 460. As also mentioned above, preferably using preconfigured rules, the Conversational AI/Bot Service System 320 formats a conversation for the user device 105 (FIG. 5C) requesting additional information from the user device 105 regarding the initiated insurance underwriting task, step 470. The data responsive to the presented conversation from the Conversational AI/Bot Service System 320 is then preferably sent back from the user device 105 to the AI API 310 for parsing and analysis, step 480. The aforesaid process preferably continues until system 300 determines no more relevant data is to be gained from the user device 105.
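By way of example, and not limitation, the iterative loop of steps 440-480 might be sketched as follows; user_device, ai_api, and bot_service are hypothetical stand-ins for the components described above:

```python
# Simplified sketch of the loop in steps 440-480; all objects and method
# names are hypothetical placeholders for the components described above.

def underwriting_loop(user_device, ai_api, bot_service):
    media = user_device.transmit_media()                     # step 440
    while True:
        findings = ai_api.parse(media)                       # step 450
        bot_service.receive_findings(findings)               # step 460
        questions = bot_service.format_conversation(findings)
        if not questions:        # no more relevant data to be gained
            break
        media = user_device.present_and_collect(questions)   # steps 470-480
```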
It is to be appreciated the aforesaid process can be performed on a real-time basis, wherein a user of user device 105 can be capturing video that is being simultaneously analyzed by system 300. For instance, while a user is capturing video of a kitchen, conversation bubbles (the “conversation flow” sent by the Conversational AI/Bot Service System 320) will appear on the user's device 105 requesting certain information (e.g., the make/model of an appliance, or a request to capture a fire extinguisher and/or other safety equipment).
Data that is captured during the assistive and adaptive workflow as described above is preferably stored in database 330, step 490. It is to be appreciated this stored data may be formatted in a comprehensive report wherein the stored data is enriched with value estimates and risk projections using information captured and identified by the AI API 310 and user input, which may include third party data. An example of such a report includes: property address; date of report; property contents; conditions; material; and risk items. Using external and internal data sets, the report is enriched such that estimates of value and the amount of risk exposure are then added to the report.
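By way of example, and not limitation, the enriched report described above might be assembled as sketched below; the field names and enrichment sources are assumptions for illustration, not the actual report format:

```python
# Illustrative sketch of the enriched report described above; field names
# and enrichment sources are hypothetical, not the actual report schema.

import datetime

def build_report(address, contents, internal_data, external_data):
    """Assemble a property report enriched with value estimates and
    risk projections from internal and external datasets."""
    return {
        "property_address": address,
        "date_of_report": datetime.date.today().isoformat(),
        "property_contents": contents,   # objects, conditions, materials
        "risk_items": internal_data.get("risk_items", []),
        # Enrichment: value and risk-exposure estimates added to the report
        "estimated_value": external_data.get("value_estimate"),
        "risk_exposure": internal_data.get("risk_projection"),
    }
```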
Certain implementations of the disclosed technology can include transmitting the audiovisual documentation to a remote server.
In certain implementations, the video portion of the audiovisual documentation may be captured, at least in part, by a rear-facing camera of the mobile computing device.
In certain implementations, the user training progress indicator may be configured to provide visual or audible feedback to the user to prompt the user to narrate the audiovisual documentation.
In accordance with certain exemplary implementations of the disclosed technology, the audiovisual documentation can include a representation of damage to a structure in the environment. In certain implementations, the audiovisual documentation can include a representation of damage to a vehicle.
In accordance with certain exemplary implementations of the disclosed technology, the user training progress indicator may include one or more of audible and visual information.
Certain implementations of the disclosed technology can include detecting one or more objects located in the environment based upon a set of rules. In certain implementations, the set of rules may cause an analysis to determine whether the one or more objects match one or more objects predetermined to be present in the environment. In certain implementations, the one or more rules may be used to assess whether one or more predetermined specifications can be determined from analysis of the one or more objects.
Certain implementations of the disclosed technology can include outputting, from the mobile computing device, user instructions for capturing the audiovisual documentation of the environment.
It is to be appreciated the systems and methods disclosed herein can provide technical and functional improvements over existing computer systems, including, but not limited to, providing a computer platform that enables property inspection to be performed by a user of a smart device who is not previously trained in property inspections, without sacrificing quality. The disclosed technology also provides a computing platform that enables property inspections and/or loss reports to be performed in a more time- and cost-efficient manner as compared to employing trained property inspectors. Insurance providers and other parties can rely on actual property data rather than high-level analytics and assumptions. The disclosed technology further provides a computing platform that enables insurance providers and other parties to accurately quote insurance coverage faster and in a more personalized/tailored way, ensuring the appropriate price and level of coverage such that insurance carriers have an accurate understanding of exposure to risk and property value.
With certain illustrated embodiments described above, it is to be appreciated that various non-limiting embodiments described herein may be used separately, combined or selectively combined for specific applications. Further, some of the various features of the above non-limiting embodiments may be used without the corresponding use of other described features. The foregoing description should therefore be considered as merely illustrative of the principles, teachings and exemplary embodiments of this invention, and not in limitation thereof.
It is to be understood that the above-described arrangements are only illustrative of the application of the principles of the illustrated embodiments. Numerous modifications and alternative arrangements may be devised by those skilled in the art without departing from the scope of the illustrated embodiments, and the appended claims are intended to cover such modifications and arrangements.
Claims
1. A computer-implemented method, comprising:
- receiving, at a mobile computing device, an input command to initiate capturing audiovisual documentation of an environment;
- outputting, from the mobile computing device: instructions for a user to utter one or more test phrases; and a user training progress indicator configured to advance responsive to an audible detection of the one or more test phrases;
- while in a training phase: receiving audible input corresponding to the instructions; and advancing the user training progress indicator responsive to the received audible input; and responsive to receiving a predetermined threshold amount of audible input, switching to an audiovisual capturing phase;
- while in the audiovisual capturing phase: capturing, by the mobile computing device, audiovisual documentation comprising video and user-narrated audio of the environment.
2. The computer-implemented method of claim 1, further comprising transmitting the audiovisual documentation to a remote server.
3. The computer-implemented method of claim 1, wherein the audiovisual documentation is captured, at least in part, by a rear-facing camera of the mobile computing device.
4. The computer-implemented method of claim 1, wherein the user training progress indicator is configured to provide visual or audible feedback to the user to prompt the user to narrate the audiovisual documentation.
5. The computer-implemented method of claim 1, wherein the audiovisual documentation comprises damage to a structure in the environment.
6. The computer-implemented method of claim 1, wherein the audiovisual documentation comprises damage to a vehicle.
7. The computer-implemented method of claim 1, wherein the user training progress indicator comprises one or more of audible and visual information.
8. The computer-implemented method of claim 1, further comprising detecting one or more objects located in the environment based upon a set of rules, wherein the set of rules cause an analysis to determine whether the one or more objects match one or more objects predetermined to be present in the environment and whether one or more predetermined specifications can be determined from analysis of the one or more objects.
9. The computer-implemented method of claim 1, further comprising outputting, from the mobile computing device, user instructions for capturing the audiovisual documentation of the environment.
10. A computer system for capturing audiovisual documentation of an environment, comprising:
- a mobile computing device, comprising: a camera configured to capture video; a microphone configured to capture audio; one or more processors in communication with the camera and the microphone; a first memory configured for storing captured video and audio; a second memory storing computer code that causes the one or more processors to: receive an input command to initiate capturing audiovisual documentation of an environment; output, from the mobile computing device: instructions for a user to utter one or more test phrases; and a user training progress indicator configured to advance responsive to an audible detection of the one or more test phrases; while in a training phase: receive audible input corresponding to the instructions; and advance the user training progress indicator responsive to the received audible input; and responsive to receiving a predetermined threshold amount of audible input, switch to an audiovisual capturing phase; and while in the audiovisual capturing phase: capture, by the mobile computing device, audiovisual documentation comprising video and user-narrated audio of the environment.
11. The computer system of claim 10, further comprising a property inspection server system coupled to a network in communication with the mobile computing device and configured to generate a report comprising an estimate of loss based on a condition of one or more objects in the environment.
12. The computer system of claim 10, wherein the environment comprises one or more of a kitchen, a living room, a bedroom, a bathroom, a garage, an outside structure, and a roof.
13. The computer system of claim 10, further comprising a feature extractor system that utilizes one or more of an Artificial Intelligence (AI) API and a set of rules to analyze the one or more objects in the environment to determine whether the one or more objects match one or more objects predetermined to be present in the environment.
14. The computer system of claim 10, wherein the computer code further causes the one or more processors to transmit the audiovisual documentation to a remote server.
15. The computer system of claim 10, wherein the mobile computing device is configured to capture the audiovisual documentation at least in part by a rear facing camera.
16. The computer system of claim 10, wherein the user training progress indicator is configured to provide visual or audible feedback to the user to prompt the user to narrate the audiovisual documentation.
17. The computer system of claim 10, wherein the audiovisual documentation comprises damage to a structure in the environment.
18. A non-transitory computer-readable storage medium storing computer code configured to cause one or more processors to perform a method of:
- receiving, at a mobile computing device, an input command to initiate capturing audiovisual documentation of an environment;
- outputting, from the mobile computing device: instructions for a user to utter one or more test phrases; and a user training progress indicator configured to advance responsive to an audible detection of the one or more test phrases;
- while in a training phase: receiving audible input corresponding to the instructions; and advancing the user training progress indicator responsive to the received audible input; and responsive to receiving a predetermined threshold amount of audible input, switching to an audiovisual capturing phase;
- while in the audiovisual capturing phase: capturing, by the mobile computing device, audiovisual documentation comprising video and user-narrated audio of the environment.
19. The non-transitory computer-readable storage medium of claim 18, wherein the computer code is further configured to cause the one or more processors to output one or more of visual and audible feedback to the user to prompt the user to narrate the audiovisual documentation.
20. The non-transitory computer-readable storage medium of claim 18, wherein the computer code is further configured to cause the one or more processors to detect one or more objects located in the environment based upon a set of rules, wherein the set of rules cause an analysis to determine whether the one or more objects match one or more objects predetermined to be present in the environment and whether one or more loss specifications can be determined from analysis of the one or more objects.
Type: Application
Filed: Jun 11, 2024
Publication Date: Oct 3, 2024
Inventors: Cole Winans (Vanzant, MO), Brian Keller (Columbia, MD), Angel Ai Jun Lam (Denver, CO), Victor Palmer (College Station, TX)
Application Number: 18/739,717