System, method and computer program product for predicting a next hop in a search path
A system, method, and computer program product for identifying consumer items more likely to be bought by an individual user. In some embodiments, a collaborative filter may be used to rank items based on the degree to which they match user preferences. The collaborative filter may be hierarchical and may take various factors into consideration. Example factors may include the similarity among items based on observable features, a summary of aggregate online search behavior across multiple users, the item features determined to be most important to the individual user, and a baseline item against which a conditional probability of another item being selected is measured.
This application is a continuation of, and claims a benefit of priority under 35 U.S.C. § 120 of the filing date of U.S. patent application Ser. No. 16/240,339, filed Jan. 4, 2019, issued as U.S. Pat. No. 11,361,331, entitled, “SYSTEM, METHOD AND COMPUTER PROGRAM PRODUCT FOR PREDICTING A NEXT HOP IN A SEARCH PATH,” which is a continuation of, and claims a benefit of priority under 35 U.S.C. § 120 of the filing date of U.S. patent application Ser. No. 15/297,883, filed Oct. 19, 2016, issued as U.S. Pat. No. 10,210,534, entitled “SYSTEM, METHOD AND COMPUTER PROGRAM PRODUCT FOR PREDICTING ITEM PREFERENCE USING REVENUE-WEIGHTED COLLABORATIVE FILTER,” which is a continuation of U.S. patent application Ser. No. 14/152,884, filed Jan. 10, 2014, issued as U.S. Pat. No. 9,508,084, entitled “SYSTEM, METHOD AND COMPUTER PROGRAM PRODUCT FOR PREDICTING ITEM PREFERENCE USING REVENUE-WEIGHTED COLLABORATIVE FILTER,” which is a continuation of U.S. patent application Ser. No. 13/173,332, filed Jun. 30, 2011, issued as U.S. Pat. No. 8,661,403, entitled “SYSTEM, METHOD AND COMPUTER PROGRAM PRODUCT FOR PREDICTING ITEM PREFERENCE USING REVENUE-WEIGHTED COLLABORATIVE FILTER,” which are fully incorporated by reference herein for all purposes.
TECHNICAL FIELD

This disclosure relates generally to collaborative filters used in a marketplace. More particularly, embodiments disclosed herein relate to a system, method, and computer program product embodying a collaborative filter for identifying certain consumer items.
BACKGROUND OF THE RELATED ART

Currently, collaborative filters exist to serve a marketplace where an item is relatively low-priced (e.g., packaged groceries), consumed frequently (e.g., movie rentals), bundled with other items (e.g., a consumer who buys a razor may also buy replacement blades), and/or there is little measurable similarity between items (e.g., books). When a consumer does not have information relevant to a specifically desired product or does not understand such information, the consumer can be at a serious negotiation disadvantage. Exacerbating this problem is the fact that complex, negotiated transactions can be difficult for consumers to understand due to a variety of factors, including interdependence between local demand and availability of products or product features, the point-in-time in the product lifecycle at which a transaction occurs, and the interrelationships of various transactions to one another. For example, a seller may sacrifice margin on one aspect of one transaction and recoup that margin from another transaction with the same (or a different) customer.
For items involving complex transactions, currently available data is generally single dimensional. To illustrate with a specific example, a recommended price (e.g., $20,000) for item A may not take into account how sensitive that price is (“Is $19,000 a good or bad price for this item?”) or how item A compares to item B at about the same price. Consequently, there is always room for improvement.
SUMMARY OF THE DISCLOSURE

Embodiments disclosed herein provide a system, method, and computer program product for identifying consumer items more likely to be bought by an individual user. In some embodiments, a collaborative filter may be used to rank items based on the degree to which they match user preferences. The collaborative filter may be hierarchical and may take various factors into consideration. Example factors may include the similarity among items based on observable features, a summary of aggregate online search behavior across multiple users, the item features determined to be most important to the individual user, and a baseline item against which a conditional probability of another item being selected is measured. Example observable features may include, but are not limited to, price, color, size, etc.
In some embodiments, a method of identifying consumer items more likely to be purchased by an individual user may comprise determining a similarity among items based on observable features. In some embodiments, a collaborative filter disclosed herein may comprise a plurality of software components, including a first component for determining a similarity among items based on observable features. In some embodiments, the first component of the filter may be configured to compute individual feature difference between a first observation and a second observation, compute a composite similarity between the first observation and the second observation, and repeat these computations for all possible values of the first observation and the second observation.
In some embodiments, a method of identifying consumer items more likely to be purchased by an individual user may further comprise aggregating online search or item discovery behavior across multiple users. In some embodiments, a collaborative filter disclosed herein may further comprise a second component for aggregating online search behavior across multiple users. The second component of the filter may be implemented in various ways. For example, in one embodiment, the second component may be configured to collect item view frequencies only for individual “hops” in a search path (a sequence of item discovery). In another embodiment, the second component may be configured to collect item view frequencies for all “hops” in the search path. In yet another embodiment, the second component may be configured to collect item view frequencies for all pairs of items in the search path, regardless of the order in which they were searched.
In some embodiments, a method of identifying consumer items more likely to be purchased by an individual user may further comprise determining item features that are most important to the individual user. It is possible that a new user will exhibit item discovery behavior that is distinctly different from that of other users. The filter disclosed herein may include a third component that can determine, through the new user's item discovery behavior, which features may be the most important to the individual user. In some embodiments, a weighting algorithm is utilized. When a user establishes his/her baseline item, the weights assigned to each feature are the same as those used to compute the similarity among items. The initial feature weights can be heuristically determined. After the first hop, when a user chooses the next item, the features in the baseline item and the next item are compared to determine how much they differ across multiple dimensions. This process can continue until the user's item discovery terminates.
Based on the single-hop search behavior, there can exist an implied pairing between two items for every observation in a historical item discovery log. When a user sets a baseline item, embodiments disclosed herein can operate to predict what his/her next move might be. In some embodiments, a method of predicting a user's item preference may comprise the following:
- For each paired observation with a baseline item i, determine what the next selected item, j, is expected to be. Next, examine all other observations to determine what the next selected item, t, was for all cases where the baseline item is i.
- For every case where the item selected after item i was t, compute a kernel K(d_it) for a given radius.
- Determine a conditional probability that a user will select item j after i. This probability may reflect the structural, feature-based similarity (through the dissimilarity d_ij = 1 − s_ij, the converse of the similarity s_ij) weighted by the aggregate behavior, q_ij(h).
- For each item, t, given a baseline item i, it may be possible to predict the item selected, compare the prediction to the actual next item selected, and assign a penalty for incorrect predictions. Assuming a baseline of item i, a predicted next item selection of j, and an actual next item selection of k, the penalty may be (a code sketch follows this list):

L_{j,k}(i) = 0 if j = k

L_{j,k}(i) = max(R_k − R_j, 0) if j ≠ k

In this example, R_k may be the revenue that could be generated by selling one unit of item k.
- For every observation in the historical data set, the penalty value may be computed and the sum of the penalties for incorrect predictions, L, can be totaled.
- The prediction errors are driven by the weighted similarities, s_ij. Thus, changing the weights w_p may change the value of L.
- In one embodiment, the Epanechnikov kernel may be used with a radius of 0.5. Beginning with initial weights w_p = 1/m for the p = 1, …, m features, various sets of weights may be used to determine the set of weights that minimizes the total penalty, L.
- M iterations can be run, each with a separate set of weights that represent a minor perturbation of the previous iteration's weights, constrained so that Σ_{p=1}^{m} w_p = 1 holds.
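As a minimal illustration, the revenue-weighted penalty above may be sketched in Python as follows (the function name and the example revenue values are assumptions for illustration only, not part of the disclosure):

```python
def penalty(predicted: str, actual: str, revenue: dict) -> float:
    """Revenue-weighted penalty L_{j,k}(i): zero for a correct prediction,
    otherwise the revenue forgone by predicting item j when the user
    actually selected item k."""
    if predicted == actual:
        return 0.0
    return max(revenue[actual] - revenue[predicted], 0.0)

# Hypothetical per-unit revenues for three items.
revenue = {"i": 1200.0, "j": 1500.0, "k": 1800.0}
print(penalty("j", "k", revenue))  # 300.0 -- predicted j, user chose pricier k
print(penalty("k", "j", revenue))  # 0.0   -- no penalty when R_k <= R_j
```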
Software implementing embodiments disclosed herein may be implemented in suitable computer-executable instructions that may reside on one or more non-transitory computer-readable storage media. Within this disclosure, the term “computer-readable medium” encompasses all types of data storage medium that can be read by at least one processor. Examples of a computer-readable medium can include random access memories, read-only memories, hard drives, data cartridges, magnetic tapes, floppy diskettes, flash memory drives, optical data storage devices, compact-disc read-only memories, and other appropriate computer memories and data storage devices.
Embodiments of a collaborative filter disclosed herein can provide many advantages. For example, while conventional collaborative filters may be utilized in an electronic market to make suggestions on similar consumer goods, they are ineffective in making suggestions on alternatives. Embodiments of a collaborative filter disclosed herein can be particularly useful in identifying consumer items that are more likely to be bought by an individual user, predicting the user's item preference, and increasing the likelihood of the user actually making a purchase.
These, and other, aspects of the disclosure will be better appreciated and understood when considered in conjunction with the following description and the accompanying drawings. It should be understood, however, that the following description, while indicating various embodiments of the disclosure and numerous specific details thereof, is given by way of illustration and not of limitation. Many substitutions, modifications, additions and/or rearrangements may be made within the scope of the disclosure without departing from the spirit thereof, and the disclosure includes all such substitutions, modifications, additions and/or rearrangements.
The drawings accompanying and forming part of this specification are included to depict certain aspects of the disclosure. It should be noted that the features illustrated in the drawings are not necessarily drawn to scale. A more complete understanding of the disclosure and the advantages thereof may be acquired by referring to the following description, taken in conjunction with the accompanying drawings, in which like reference numbers indicate like features.
The disclosure and various features and advantageous details thereof are explained more fully with reference to the exemplary, and therefore non-limiting, embodiments illustrated in the accompanying drawings and detailed in the following description. Descriptions of known programming techniques, computer software, hardware, operating platforms and protocols may be omitted so as not to unnecessarily obscure the disclosure in detail. It should be understood, however, that the detailed description and the specific examples, while indicating the preferred embodiments, are given by way of illustration only and not by way of limitation. Various substitutions, modifications, additions and/or rearrangements within the spirit and/or scope of the underlying inventive concept will become apparent to those skilled in the art from this disclosure.
As used herein, the terms “comprises,” “comprising,” “includes,” “including,” “has,” “having,” or any other variation thereof, are intended to cover a non-exclusive inclusion. For example, a process, product, article, or apparatus that comprises a list of elements is not necessarily limited to only those elements but may include other elements not expressly listed or inherent to such process, product, article, or apparatus. Further, unless expressly stated to the contrary, “or” refers to an inclusive or and not to an exclusive or. For example, a condition A or B is satisfied by any one of the following: A is true (or present) and B is false (or not present), A is false (or not present) and B is true (or present), and both A and B are true (or present).
Additionally, any examples or illustrations given herein are not to be regarded in any way as restrictions on, limits to, or express definitions of, any term or terms with which they are utilized. Instead these examples or illustrations are to be regarded as being described with respect to one particular embodiment and as illustrative only. Those of ordinary skill in the art will appreciate that any term or terms with which these examples or illustrations are utilized encompass other embodiments as well as implementations and adaptations thereof which may or may not be given therewith or elsewhere in the specification and all such embodiments are intended to be included within the scope of that term or terms. Language designating such non-limiting examples and illustrations includes, but is not limited to: “for example,” “for instance,” “e.g.,” “in one embodiment,” and the like.
Embodiments of the systems and methods disclosed herein may be better explained with reference to the accompanying drawings, beginning with an exemplary topology 100 in which a vehicle data system 120 is coupled over a network 170 to various entities, including users at computing devices 110, dealers 130, inventory companies 140, manufacturers 150, sales data companies 160, financial institutions 182, DMVs 180 and external data sources 184.
Vehicle data system 120 may comprise one or more computer systems with central processing units executing instructions embodied on one or more computer readable media where the instructions are configured to perform at least some of the functionality associated with embodiments disclosed herein. These applications may include a vehicle data application 190 comprising one or more applications (instructions embodied on one or more non-transitory computer readable media) configured to implement an interface module 192, data gathering module 194 and processing module 196 utilized by the vehicle data system 120. Furthermore, vehicle data system 120 may include data store 122 operable to store obtained data 124, data 126 determined during operation, models 128 which may comprise a set of dealer cost models or price ratio models, or any other type of data associated with embodiments disclosed herein or determined during the implementation of those embodiments.
Vehicle data system 120 may provide a wide degree of functionality including utilizing one or more interfaces 192 configured to, for example, receive and respond to queries from users at computing devices 110; interface with inventory companies 140, manufacturers 150, sales data companies 160, financial institutions 182, DMVs 180 or dealers 130 to obtain data; or provide data obtained, or determined, by vehicle data system 120 to any of inventory companies 140, manufacturers 150, sales data companies 160, financial institutions 182, DMVs 180, external data sources 184 or dealers 130. It will be understood that the particular interface 192 utilized in a given context may depend on the functionality being implemented by vehicle data system 120, the type of network 170 utilized to communicate with any particular entity, the type of data to be obtained or presented, the time interval at which data is obtained from the entities, the types of systems utilized at the various entities, etc. Thus, these interfaces may include, for example, web pages, web services, a data entry or database application to which data can be entered or otherwise accessed by an operator, or almost any other type of interface which it is desired to utilize in a particular context.
In general, then, using these interfaces 192, vehicle data system 120 may obtain data from a variety of sources, including one or more of inventory companies 140, manufacturers 150, sales data companies 160, financial institutions 182, DMVs 180, external data sources 184 or dealers 130, and store such data in data store 122. This data may then be grouped, analyzed or otherwise processed by vehicle data system 120 to determine desired data 126 or models 128, which are also stored in data store 122. A user at computing device 110 may access the vehicle data system 120 through the provided interfaces 192 and specify certain parameters, such as a desired vehicle configuration or incentive data the user wishes to apply, if any. The vehicle data system 120 can select a particular set of data in the data store 122 based on the user specified parameters, process the set of data using processing module 196 and models 128, generate interfaces using interface module 192 using the selected data set and data determined from the processing, and present these interfaces to the user at the user's computing device 110. More specifically, in one embodiment interfaces 192 may visually present the selected data set to the user in a highly intuitive and useful manner.
In particular, in one embodiment, a visual interface may present at least a portion of the selected data set as a price curve, bar chart, histogram, etc. that reflects quantifiable prices or price ranges (e.g., “average,” “good,” “great,” “overpriced,” etc.) relative to reference pricing data points (e.g., invoice price, MSRP, dealer cost, market average, internet average, etc.). Using these types of visual presentations may enable a user to better understand the pricing data related to a specific vehicle configuration. Additionally, by presenting data corresponding to different vehicle configurations in a substantially identical manner, a user can easily make comparisons between pricing data associated with different vehicle configurations. To further aid the user's understanding of the presented data, the interface may also present data related to incentives which were utilized to determine the presented data or how such incentives were applied to determine presented data.
Turning to the various other entities in topology 100, dealer 130 may be a retail outlet for vehicles manufactured by one or more of OEMs 150. To track or otherwise manage sales, finance, parts, service, inventory and back office administration needs, dealers 130 may employ a dealer management system (DMS) 132. Since many DMS 132 are Active Server Pages (ASP) based, transaction data 134 may be obtained directly from the DMS 132 with a “key” (for example, an ID and Password with set permissions within the DMS system 132) that enables data to be retrieved from the DMS system 132. Many dealers 130 may also have one or more web sites which may be accessed over network 170, where pricing data pertinent to the dealer 130 may be presented on those web sites, including any pre-determined, or upfront, pricing. This price is typically the “no haggle” price (i.e., price with no negotiation) and may be deemed a “fair” price by vehicle data system 120.
Inventory companies 140 may be one or more inventory polling companies, inventory management companies or listing aggregators which may obtain and store inventory data from one or more of dealers 130 (for example, obtaining such data from DMS 132). Inventory polling companies are typically commissioned by the dealer to pull data from a DMS 132 and format the data for use on websites and by other systems. Inventory management companies manually upload inventory information (photos, description, specifications) on behalf of the dealer. Listing aggregators get their data by “scraping” or “spidering” websites that display inventory content and receiving direct feeds from listing websites (for example, AutoTrader.com, FordVehicles.com, etc.).
DMVs 180 may collectively include any type of government entity to which a user provides data related to a vehicle. For example, when a user purchases a vehicle it must be registered with the state (for example, DMV, Secretary of State, etc.) for tax and titling purposes. This data typically includes vehicle attributes (for example, model year, make, model, mileage, etc.) and sales transaction prices for tax purposes.
Financial institution 182 may be any entity such as a bank, savings and loan, credit union, etc. that provides any type of financial services to a participant involved in the purchase of a vehicle. For example, when a buyer purchases a vehicle they may utilize a loan from a financial institution, where the loan process usually requires two steps: applying for the loan and contracting the loan. These two steps may utilize vehicle and consumer information in order for the financial institution to properly assess and understand the risk profile of the loan. Typically, both the loan application and loan agreement include proposed and actual sales prices of the vehicle.
Sales data companies 160 may include any entities that collect any type of vehicle sales data. For example, syndicated sales data companies aggregate new and used sales transaction data from the DMS 132 systems of particular dealers 130. These companies may have formal agreements with dealers 130 that enable them to retrieve data from the dealer 130 in order to syndicate the collected data for the purposes of internal analysis or external purchase of the data by other data companies, dealers, and OEMs.
Manufacturers 150 are those entities which actually build the vehicles sold by dealers 130. In order to guide the pricing of their vehicles, the manufacturers 150 may provide an Invoice price and a Manufacturer's Suggested Retail Price (MSRP) for both vehicles and options for those vehicles, to be used as general guidelines for the dealer's cost and price. These fixed prices are set by the manufacturer and may vary slightly by geographic region.
External information sources 184 may comprise any number of various other sources, online or otherwise, which may provide other types of desired data, for example data regarding vehicles, pricing, demographics, economic conditions, markets, locale(s), consumers, etc.
It should be noted here that not all of the various entities depicted in topology 100 are necessary, or even desired, in embodiments disclosed herein, and that certain of the functionality described with respect to the entities depicted in topology 100 may be combined into a single entity or eliminated altogether. Additionally, in some embodiments other data sources not shown in topology 100 may be utilized. Topology 100 is therefore exemplary only and should in no way be taken as imposing any limitations on embodiments disclosed herein.
To identify consumer items more likely to be bought by an individual user, a filter may be used to rank items based on the degree to which they match user preferences. The filter may be hierarchical and may take various factors into account. As shown in the accompanying drawings, these factors may include the following.
In one embodiment, the similarity among items 200 may be based on observable features. Example observable features may include, but are not limited to, price, color, size, etc. A summary of aggregate online search behavior 205 can be across multiple users. Individual search behavior 210 may include the item features determined to be most important to the individual user. A baseline item 215 may include an item against which a conditional probability of another item being selected is measured.
One objective of the filter may be to determine the probability that a user will select item j given that he/she expressed interest in baseline item i:
P(j|i)=f(similarity among items, aggregate search behavior among items, individual search behavior).
Once the probabilities are determined for all pairs of items, they may be ranked in decreasing order. When a user selects item i during their item discovery session, the items with the highest ranking conditional probabilities may be suggested to the user as “you may also like these items”. A filter that accurately predicts historical user preferences (preferences expressed without the benefit of the “you may also like . . . ” suggestions) can be leveraged to present items matching a user's preference during new item discovery sessions. It may also be used to present items with features that are both appealing to the user and may yield higher expected revenue for the seller.
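For instance, ranking candidate items by their conditional probabilities and surfacing the top few might look like the following sketch (the probability values and item names are placeholders, not data from the disclosure):

```python
# Hypothetical conditional probabilities P(j|i) for a fixed baseline item i.
p_given_i = {"item_j": 0.42, "item_k": 0.31, "item_t": 0.18, "item_u": 0.09}

# Rank in decreasing order and keep the top S for display.
S = 2
suggestions = sorted(p_given_i, key=p_given_i.get, reverse=True)[:S]
print(suggestions)  # ['item_j', 'item_k'] -- "you may also like these items"
```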
Many existing collaborative filters serve a marketplace where the consumer item is relatively low-priced, consumed frequently, bundled with other items, and/or there is little measurable similarity between items. Examples of consumer items that may be relatively low-priced may include, but are not limited to, packaged groceries. Examples of consumer items that may be consumed frequently may include, but are not limited to, movie rentals. Examples of consumer items that may be bundled with other items may include, but are not limited to, a razor and replacement blades for the razor. Examples of consumer items that may have little measurable similarity between them may include, but are not limited to, books.
The filter presented herein differs from conventional collaborative filters in that it may be applied to consumer goods that are highly priced, bought infrequently as a standalone item and where structural differences are both observable and measurable. One example could be new or used automobiles; they are bought relatively infrequently (every few years or decades), highly priced (ranging in the tens of thousands of dollars), not bought in bulk, and the similarity across vehicles may be characterized by comparison of a relatively small set of features (year, make, model, body type, cylinders, fuel type, etc.). The new filter disclosed herein is not intended to answer the question: “Given a consumer bought an item, what else may he/she want to buy?”. Rather, it may answer the question “Given a consumer is going to buy one-and-only-one item of a certain type, what candidate items should be suggested?”. Thus, the focus can be on the decisions surrounding “item A OR item B” rather than “item A AND item B.”
In some embodiments, the filter adopts the following notation:
An item (x_i) can be described by its p = 1, …, m features, which may also be known as characteristics or variables:

x_i = {x_{i,1}, x_{i,2}, …, x_{i,m}}

All n distinct items may be represented in matrix form as an n × m matrix X = [x_{i,p}], whose rows correspond to items and whose columns correspond to features.
Although the format of the data for some options may not be numeric, similarity can still be established across features by first transforming the data to a numeric scale.
Binary Features: In one embodiment, when a feature may assume only two possible states (e.g., yes/no, on/off, black/white, etc.), it can be fairly simple to map this feature onto a numeric scale by setting one state to 1 and the other to 0. For example, the following rule could be applied to transform a feature represented as “yes”/“no” onto a numeric scale.
- if x_{i,p} = “yes” then x_{i,p} = 1
- if x_{i,p} = “no” then x_{i,p} = 0.
The choice of which state gets assigned the value of 1 may not be important as it does not affect the similarity computations in the filter.
Ordinal Features: In one embodiment, when the values of a feature take on a non-numeric format with an implied order (e.g., “Low”/“Medium”/“High”, “Poor”/“Fair”/“Good”, etc.), a simple transformation may represent such features using their ranks. For example, the following rule could be applied to transform a feature represented as “low”/“medium”/“high” onto a numeric scale:
- if x_{i,p} = “low” then x_{i,p} = 1
- if x_{i,p} = “medium” then x_{i,p} = 2
- if x_{i,p} = “high” then x_{i,p} = 3.
More complex transformations may be applied if information exists supporting the need to non-uniformly space the various states.
Categorical Features: In one embodiment, when the values of a feature take on a non-numeric format without an implied order (e.g., “Red”/“White”/“Green”), similarity for that feature across observations can be established. In one embodiment, this data can be left as-is until a later stage of the filter which is explained below. If there are only two states that the feature may assume, one could also just consider the feature to be binary.
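The three transformations above might be sketched in Python as follows (the function name and mappings are illustrative assumptions; categorical values are deliberately left untransformed, as described above):

```python
def to_numeric(value, kind):
    """Map a raw feature value onto a numeric scale.

    kind is 'binary' (e.g., yes/no), 'ordinal' (e.g., low/medium/high),
    or 'categorical' (left as-is until the similarity stage).
    """
    if kind == "binary":
        return 1 if value == "yes" else 0
    if kind == "ordinal":
        return {"low": 1, "medium": 2, "high": 3}[value]
    return value  # categorical: compared for equality later

print(to_numeric("yes", "binary"))       # 1
print(to_numeric("medium", "ordinal"))   # 2
print(to_numeric("red", "categorical"))  # 'red'
```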
For example, suppose the filter were to be applied to an automobile purchase in the “midsize car” category for which there were three vehicle types, each having four features {price, fuel efficiency, turbo, color} as shown in Table 1.
In this example, the transformation to the numeric scale (except the categorical features) could yield the results shown in Table 2.
All features that have been transformed to a numeric format can then be represented on a scale bounded over [0,1] as follows:

x_{i,p} ← (x_{i,p} − min_i x_{i,p}) / (max_i x_{i,p} − min_i x_{i,p})
The process of rescaling features onto a common scale is called “standardization.” For example, suppose the filter is applied to an automobile purchase in the “midsize car” category and the least expensive car in the midsize car category was $18,000 and the most expensive car in the midsize car category was $28,000. The scaled values of the price feature for the least expensive, most expensive, and a car in the category costing $26,000 could be:
Least Expensive Car: x_{i,p} = (18,000 − 18,000)/(28,000 − 18,000) = 0/10,000 = 0.0

Most Expensive Car: x_{i,p} = (28,000 − 18,000)/(28,000 − 18,000) = 10,000/10,000 = 1.0

$26,000 Car: x_{i,p} = (26,000 − 18,000)/(28,000 − 18,000) = 8,000/10,000 = 0.8.
The standardized numeric representation of the features in this example is shown in Table 3.
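This min-max standardization may be sketched as follows, reproducing the pricing example above (the function name is an illustrative assumption):

```python
def standardize(values):
    """Min-max rescaling of a numeric feature onto [0, 1]."""
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) for v in values]

# Least expensive ($18,000), most expensive ($28,000), and a $26,000 car.
print(standardize([18_000, 28_000, 26_000]))  # [0.0, 1.0, 0.8]
```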
In some embodiments, a method of identifying consumer items more likely to be purchased by an individual user may comprise determining a similarity among items based on observable features. In one embodiment, a method of determining a similarity among items based on observable features may comprise a) computing individual feature difference between a first observation and a second observation; b) computing a composite similarity between the first observation and the second observation; and c) repeating steps a) and b) for all possible values of the first observation and the second observation.
In one embodiment, the similarity between item i and item j based on a comparison of m observable features, s_ij, can be computed using the well-known Minkowski metric:

s_ij = 1 − [Σ_{p=1}^{m} w_p |x_{i,p} − x_{j,p}|^λ]^{1/λ}

where λ ≥ 0, 0 ≤ s_ij ≤ 1, and Σ_{p=1}^{m} w_p = 1. If the features are categorical (e.g., red/white/green), then |x_{i,p} − x_{j,p}| = 0 if the categories match and 1 otherwise.
Referring to the accompanying drawings, one embodiment of a method 300 of determining the similarity among items may comprise computing individual feature differences between two observations (step 302), computing a composite similarity between the observations (step 304), and repeating these computations for all pairs of observations (step 306).
As an example, to determine the similarity between vehicle 1 and vehicle 2 (λ = 2), let w_p = 0.5 for the price feature (p = 1), 0.2 for the fuel efficiency feature (p = 2), 0.1 for the turbo feature (p = 3), and 0.3 for the color feature (p = 4). (These illustrative weights are used as given; in practice, the weights would be chosen or rescaled so that Σ_{p=1}^{m} w_p = 1.) The similarity between vehicle 1 and vehicle 2 may be determined using one embodiment of method 300 as follows:
Step 302: Compute the individual feature differences between observations i and j:

Price: |x_{i,1} − x_{j,1}| = |x_{1,price} − x_{2,price}| = |0.8 − 0.0| = 0.8

Fuel Efficiency: |x_{i,2} − x_{j,2}| = |x_{1,fuel eff} − x_{2,fuel eff}| = |0.5 − 0.0| = 0.5

Turbo: |x_{i,3} − x_{j,3}| = |x_{1,turbo} − x_{2,turbo}| = |1.0 − 0.0| = 1.0

Color: |x_{i,4} − x_{j,4}| = |x_{1,color} − x_{2,color}| = 1.0 (since blue ≠ red)
Step 304: Compute a composite similarity between observations i and j:

s_ij = 1 − [Σ_{p=1}^{m} w_p |x_{i,p} − x_{j,p}|^λ]^{1/λ} = 1 − [0.5 × (0.8)² + 0.2 × (0.5)² + 0.1 × (1.0)² + 0.3 × (1.0)²]^{1/2} = 1 − (0.77)^{1/2} ≈ 0.12
Step 306: Repeat the above steps for all possible values of i and j. In this example, there is no need to compute the values of s_ii, since by definition the similarity between an observation and itself is 1.
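A runnable sketch of this similarity computation, using the worked example above (the function name and feature-kind flags are illustrative assumptions; the categorical color feature follows the match/no-match rule):

```python
def similarity(x_i, x_j, weights, kinds, lam=2.0):
    """Composite similarity s_ij = 1 - [sum_p w_p |x_ip - x_jp|^lam]^(1/lam).
    Categorical features contribute 0 when they match and 1 otherwise."""
    total = 0.0
    for a, b, w, kind in zip(x_i, x_j, weights, kinds):
        diff = (0.0 if a == b else 1.0) if kind == "categorical" else abs(a - b)
        total += w * diff ** lam
    return 1.0 - total ** (1.0 / lam)

# Standardized features in order: price, fuel efficiency, turbo, color.
v1 = [0.8, 0.5, 1.0, "blue"]
v2 = [0.0, 0.0, 0.0, "red"]
w = [0.5, 0.2, 0.1, 0.3]
kinds = ["numeric", "numeric", "numeric", "categorical"]
print(round(similarity(v1, v2, w, kinds), 2))  # 0.12
```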
In some embodiments, a method of identifying consumer items more likely to be purchased by an individual user may further comprise generating a summary of aggregate online search behavior across multiple users, q_ij(h). Online search behavior may be aggregated across multiple users in various ways.
When a user establishes an item discovery session, he/she may begin with a baseline item, i. In one embodiment, unless the session terminates without exploration of other items, a usage log containing the items viewed and the order in which they were viewed can be updated. If the log contains the usage history for many users, an aggregation of item view frequencies may become available and can be used to measure aggregate similarity across items. Unlike the feature-based similarity metric, s_ij, this metric may reflect historical search behavior. In some embodiments, from the log, the frequency of cases where users viewed item j after they viewed item i, n_ij, may be computed in a square matrix. Table 4 provides an example where there exist only 3 items: i, j, k.
When item discovery is limited to allow only one item to be viewed at a time, item view histories may be constrained. For example, if a user selects item i as a baseline and is interested in both items j and k, he/she may choose one or the other to view first. If the user wished to view all three items, and viewed j before k, there are two possible sequences:
- i→j, i→k (after viewing j, the user went back to i then chose k)
- i→j→k (after viewing j, the user chose k)
This may lead to different embodiments of this component of the filter.
In one embodiment, frequencies may be collected only for individual “hops” in the search path, and so the aggregate frequency may be q_ij(1) = n_ij. In the example provided, the first sequence of item discovery could increase the total value of n_ij and n_ik by 1 and n_jk by 0. The second sequence of item discovery could increase the total value of n_ij and n_jk by 1 and n_ik by 0.
In another embodiment, frequencies may be collected for all h “hops” in the search path, and so the aggregate frequency q_ij(h) could be a weighted sum of the view frequencies over all items r ≠ i in the path, with pairs separated by more hops discounted. In the example provided, the first sequence of item discovery may increase the total value of n_ij and n_ik by 1 and n_jk by 0. The second sequence of item discovery, where h = 2, may increase the total value of n_ij and n_jk by 1 and n_ik by ⅔.
In yet another embodiment, frequencies may be collected for all pairs of items in the search path, regardless of the order in which they were searched. This may be appealing when the users' item search histories are known, but the order in which the items were searched is not. In the example provided, both the first and second sequences of item discovery may increase the total values of n_ij, n_ik and n_jk by 1.
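The first and third aggregation variants might be sketched as follows (the multi-hop discounting of the second variant is not fully specified above and is therefore omitted; function names are illustrative assumptions):

```python
from collections import Counter
from itertools import combinations

def single_hop_counts(paths):
    """n_ij counted only for consecutive views (individual hops)."""
    n = Counter()
    for path in paths:
        for a, b in zip(path, path[1:]):
            n[(a, b)] += 1
    return n

def unordered_pair_counts(paths):
    """n_ij counted once per distinct pair of items in a path,
    regardless of the order in which they were viewed."""
    n = Counter()
    for path in paths:
        for a, b in combinations(sorted(set(path)), 2):
            n[frozenset((a, b))] += 1
    return n

paths = [["i", "j", "i", "k"],  # viewed j, went back to i, then chose k
         ["i", "j", "k"]]       # viewed j, then chose k
print(single_hop_counts(paths)[("i", "j")])                 # 2
print(unordered_pair_counts(paths)[frozenset(("j", "k"))])  # 2
```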
In some embodiments, a method of identifying consumer items more likely to be purchased by an individual user may further comprise determining item features that are most important to an individual user, u_ij(h). As discussed above, for any given item, there can be a multitude of observable and measurable features. While the first two components of the item similarity may reflect the aggregate user discovery behavior and the structural differences among items, it is possible that a new user may exhibit item discovery behavior that is distinctly different from that of others. Initially, when a new user selects his or her baseline item i, there may be no information to distinguish him/her from other users. However, once the next item is selected, additional information may become available that can be leveraged to provide more precise suggestions on other items that the user may like.
In the computation of the structural similarity among items based on features, the computation of s_ij looked at the weighted difference in features, and the weight assigned to feature p was denoted by w_p. The user-level component of the filter reflects an adjustment to s_ij based on the features determined, through the new user's item discovery behavior, to be most important: u_ij(h) = s_ij + δ_ij(h). This component may be computed in the following steps:
Step 1: When a user establishes his/her baseline item, i, the weights assigned to each feature, w_p(h) = w_p(0), may be the same as those used to compute s_ij. How the initial values of the various w_p's are heuristically determined is explained later in this application.
Step 2: After the first hop, when a user chooses the next item, j, the features in items i and j are compared to see how much they differ across the m dimensions. For dimension p, the difference may be d_ij(p) = |x_{i,p} − x_{j,p}|, and so the weight w_p(h) may be adjusted to be w*_p(h) = [1 − |d_ij(p)|] × w_p(h−1). Once this is done for all features p, the weights are rescaled so they sum to 1:

w**_p(h) = w*_p(h) / Σ_{q=1}^{m} w*_q(h)
Step 3: After the user selects hop h, his/her user-level structural similarity may be computed as:
u_ij(h) = 1 − [Σ_{p=1}^{m} w**_p(h) |x_{i,p} − x_{j,p}|^λ]^{1/λ}
In the earlier example, the original values of w_p(0) for the price, fuel efficiency, turbo and color features were w = {0.5, 0.2, 0.1, 0.3}, respectively. If a particular user started his/her search with vehicle 1 and then went to vehicle 3 at hop h = 1, then the revised weights before rescaling to sum to 1 may be:
Price (p = 1): w*_1(1) = [1 − |d_ij(1)|] × w_1(0) = [1 − |0.2|] × 0.5 = 0.4

Fuel Efficiency (p = 2): w*_2(1) = [1 − |d_ij(2)|] × w_2(0) = [1 − |0.5|] × 0.2 = 0.10

Turbo (p = 3): w*_3(1) = [1 − |d_ij(3)|] × w_3(0) = [1 − |0|] × 0.1 = 0.1

Color (p = 4): w*_4(1) = [1 − |d_ij(4)|] × w_4(0) = [1 − |0|] × 0.3 = 0.3
After rescaling so that the weights sum to 1, the revised feature weights may be: {0.4, 0.1, 0.1, 0.3}/(0.4 + 0.1 + 0.1 + 0.3) ≈ {0.44, 0.11, 0.11, 0.33}.
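A sketch of this weight-update step, reproducing the vehicle 1 → vehicle 3 example (the function name is an illustrative assumption):

```python
def update_weights(weights, diffs):
    """Down-weight features on which the chosen item differs from the
    baseline, w*_p = (1 - |d_p|) * w_p, then rescale to sum to 1."""
    raw = [(1.0 - abs(d)) * w for w, d in zip(weights, diffs)]
    total = sum(raw)
    return [r / total for r in raw]

# Feature differences between vehicle 1 and vehicle 3 from the example:
# price 0.2, fuel efficiency 0.5, turbo 0, color 0.
w0 = [0.5, 0.2, 0.1, 0.3]
print([round(w, 2) for w in update_weights(w0, [0.2, 0.5, 0.0, 0.0])])
# [0.44, 0.11, 0.11, 0.33]
```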
The feature weights used to compute s_ij and u_ij(0) may be determined heuristically. The values of w_p can reflect the relative importance of each feature in separating one item from another: higher weights (closer to 1) may indicate that a feature is more important in distinguishing items, and smaller weights (closer to 0) that it is less important. In this stage of the filter, we may heuristically determine the optimal weights.
For example, if we define d_ij = 1 − s_ij as the dissimilarity between two items, then a kernel K(d_ij) can be defined for a given radius, 0 ≤ r ≤ 1. Some examples of possible kernels are shown in the accompanying drawings.
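One such kernel, used in an embodiment below, is the Epanechnikov kernel. A minimal sketch follows; folding the radius in as a simple rescaling of the dissimilarity is an assumption here:

```python
def epanechnikov(d, r=0.5):
    """Epanechnikov kernel over dissimilarity d with radius r:
    K(d) = 0.75 * (1 - (d/r)**2) when d <= r, else 0."""
    u = d / r
    return 0.75 * (1.0 - u * u) if abs(u) <= 1.0 else 0.0

print(epanechnikov(0.0))   # 0.75 -- identical items receive maximal weight
print(epanechnikov(0.25))  # ~0.56
print(epanechnikov(0.6))   # 0.0  -- items beyond the radius are ignored
```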
For every observation in the historical item discovery log, there can exist an implied pairing between two items, i and j, based on the single-hop search behavior. When a user sets a baseline item of i, one can predict what his/her next move will be: item j, item k, or some other item t. This prediction may be done as follows:
(A) For each paired observation with a baseline item i, determine what the next selected item, j, is expected to be. Next, examine all other observations to determine what the next selected item, t, was for all cases where the baseline item is i.
(B) For every case where the item selected after item i was t, compute the kernel K(d_it) for a given radius.
(C) The conditional probability that a user will select item j after i may then take the form:

g_h(j|i) = K(d_ij) · q_ij(h) / Σ_t K(d_it) · q_it(h)

This may reflect the structural, feature-based similarity (through the kernel of the dissimilarity d_ij = 1 − s_ij, the converse of s_ij) weighted by the aggregate behavior, q_ij(h).
(D) For each item, t, selected given a baseline item i, it may be possible to predict the item selected, compare the prediction to the actual next item selected, and assign a penalty for incorrect predictions. Assuming a baseline of item i, a predicted next item selection of j, and an actual next item selection of k, the penalty may be:
L_{j,k}(i) = 0 if j = k

L_{j,k}(i) = max(R_k − R_j, 0) if j ≠ k
In this example, Rk may be the revenue that could be generated by selling one unit of item k.
(E) For every observation in the historical data set, the penalty value may be computed and the sum of the penalties for incorrect predictions, L, can be totaled.
(F) Since the prediction errors are driven by the weighted similarities, s_ij, changing the weights w_p may change the value of L.
(G) In one embodiment, the Epanechnikov kernel may be used with a radius of 0.5. Beginning with initial weights w_p = 1/m for the p = 1, …, m features, various sets of weights may be used to determine the set of weights that minimizes the total penalty, L.
(H) M iterations can be run, each with a separate set of weights that represent a minor perturbation of the previous iteration's weights, constrained so that Σ_{p=1}^{m} w_p = 1 holds, as sketched below.
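The heuristic search in steps (A)-(H) might be sketched end-to-end as follows. This is a minimal illustration under stated assumptions: the inputs (items, one-hop counts q, revenues, observed pairs) are hypothetical, the prediction rule scores each candidate t by K(d_it) · q_it as suggested by steps (B)-(C), and the perturbation scheme is one simple possibility.

```python
import random

def similarity(x_i, x_j, w, lam=2.0):
    """Weighted Minkowski similarity s_ij over standardized features."""
    total = sum(wp * abs(a - b) ** lam for wp, a, b in zip(w, x_i, x_j))
    return 1.0 - total ** (1.0 / lam)

def epanechnikov(d, r=0.5):
    """Epanechnikov kernel over dissimilarity d with radius r."""
    u = d / r
    return 0.75 * (1.0 - u * u) if abs(u) <= 1.0 else 0.0

def total_penalty(w, items, pairs, q, revenue, r=0.5):
    """Steps (A)-(E): predict the next item for each observed
    (baseline, actual) pair and total the revenue penalties."""
    total = 0.0
    for i, actual in pairs:
        scores = {t: epanechnikov(1.0 - similarity(items[i], items[t], w), r)
                     * q.get((i, t), 0)
                  for t in items if t != i}
        predicted = max(scores, key=scores.get)
        if predicted != actual:
            total += max(revenue[actual] - revenue[predicted], 0.0)
    return total

def perturb(w, step=0.05):
    """Minor random perturbation, renormalized so the weights sum to 1."""
    raw = [max(wp + random.uniform(-step, step), 1e-6) for wp in w]
    s = sum(raw)
    return [wp / s for wp in raw]

# Hypothetical inputs: standardized two-feature items, one-hop counts
# q_ij, per-unit revenues, and observed (baseline, next-item) pairs.
items = {"a": [0.0, 0.1], "b": [0.2, 0.3], "c": [0.9, 0.8]}
q = {("a", "b"): 5, ("a", "c"): 2, ("b", "c"): 4, ("b", "a"): 3}
revenue = {"a": 1000.0, "b": 1500.0, "c": 2000.0}
pairs = [("a", "b"), ("a", "b"), ("a", "c"), ("b", "c")]

random.seed(0)
m = 2
best_w = [1.0 / m] * m  # initial weights w_p = 1/m
best_L = total_penalty(best_w, items, pairs, q, revenue)
for _ in range(200):    # M iterations of minor perturbations (step H)
    cand = perturb(best_w)
    L = total_penalty(cand, items, pairs, q, revenue)
    if L < best_L:
        best_w, best_L = cand, L
print(best_w, best_L)
```

In practice, the perturbation size, the number of iterations M, and the handling of ties among zero-score candidates would all be tuning choices.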
For example, configurator data 505 may contain features for each of the vehicle trims: year, make, model, body type, #cylinders, aspiration, #doors, gross vehicle weight, miles-per-gallon (highway and city), drive (4WD, AWD, 2WD), fuel type, and displacement. The unique key used for matching in this database may be the ‘trim_id’ at step 510.
Current pricing data 575 may be organized by ZIP Code and may include Base MSRP, configured MSRP, dealer invoice, dealer cost, price (unadjusted for incentives), etc. The pricing data may be an output of a statistical model that generates prices for each trim and ZIP Code based on historical transactions data. The unique key used for matching in this database may be the ‘trim_id’ at step 510. A secondary key used for matching with the incentives data may be the ‘ZIP Code.’
Incentives data 515 may include dealer and customer cash incentives by date, ZIP Code and vehicle trim. If an incentive is in place at the time of search for the vehicle trim and ZIP Code specified by the user, the incentives can be added to the pricing data and the price may be adjusted by the amount of the incentive based on match of ‘trim_id’ and ‘ZIP Code’ at step 520.
Historical transactions logs 530 may include unique visitor ID, time of visit, date of visit, ZIP Code, vehicle trims searched and order in which vehicle trims were searched.
Live visitor logs may be in place for those currently on the base website. Upon start of a session, the unique visitor ID, time of visit, date of visit, ZIP Code may be stored. At each subsequent hop, the vehicle trim being viewed can be stored in the order of visit. For each hop, the suggested vehicles selected by the algorithm to be most similar to the current view may be displayed and the trim-level ID is saved.
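The data sets described above might be modeled as simple records joined on ‘trim_id’ and ‘ZIP Code’; the sketch below is an assumption about shape only, with field names beyond those mentioned in the text being illustrative:

```python
from dataclasses import dataclass

@dataclass
class PriceRecord:
    """Pricing data keyed by trim_id, with ZIP Code as a secondary key."""
    trim_id: str
    zip_code: str
    base_msrp: float
    dealer_invoice: float

def apply_incentive(price: PriceRecord, incentives: dict) -> float:
    """Adjust the price by any incentive matching trim_id and ZIP Code."""
    return price.base_msrp - incentives.get((price.trim_id, price.zip_code), 0.0)

p = PriceRecord("trim_123", "90210", 27_000.0, 25_500.0)
print(apply_incentive(p, {("trim_123", "90210"): 1_000.0}))  # 26000.0
```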
Attention may now be directed to additional example embodiments for usage of the hierarchical filter.
Example Embodiment 1: A generalized filter can be based on a combination of weighted feature similarities and historical aggregate user behavior using only one-hop behavior. This filter's weights may be optimized so that the expected loss in revenue due to inaccurate predictions is minimized. When a user visits the item discovery tool, the conditional probability of choosing item j after item i may be based on the first two components of the filter: the weighted feature-level similarity and the aggregate user behavior (using only the 1-hop data). The conditional probability of any other item, t, being selected given a baseline of item i may be computed as: P(t|i) = (1 − α)·g_1(t|i) + α·u_it(1), where α = |δ_ij(1)| = |s_ij − u_ij(1)| and may put emphasis on the user-level behavior proportional to the degree to which the user's behavior differs from the aggregate. After each selection (including the first) of an item, the conditional probabilities for all other items may be ranked and the top S (based on web page real estate available) can be displayed. Comparison of items is unrestricted: every item pair i, j in the universe of items may be used in the computations of conditional probability.
Example Embodiment 2: A generalized filter can be based on a combination of weighted feature similarities and historical aggregate user behavior and may take into account all hops initiated by a user. This filter's weights may be optimized so that the expected loss in revenue due to inaccurate predictions is minimized. When a user visits the item discovery tool, the conditional probability of choosing item j after item i may be based on the first two components of the filter: the weighted feature-level similarity and the aggregate user behavior (using all of the h-hop data). The conditional probability of any other item, t, being selected given a baseline of item i may be computed as: P(t|i) = (1 − α)·g_h(t|i) + α·u_it(h), where α = |s_ij − u_ij(1)| and may put emphasis on the user-level behavior proportional to the degree to which the user's behavior differs from the aggregate. After each selection (including the first) of an item, the conditional probabilities for all other items may be ranked and the top S (based on web page real estate available) can be displayed. Comparison of items may be unrestricted: every item pair i, j in the universe of items may be used in the computations of conditional probability.
Example Embodiment 3: A generalized filter can be based on a combination of weighted feature similarities and historical aggregate user behavior and takes into account all hops initiated by a user. This filter does not distinguish the order in which hops were executed. The filter's weights may be optimized so that the expected loss in revenue due to inaccurate predictions is minimized. When a user visits the item discovery tool, the conditional probability of choosing item j after item i may be based on the first two components of the filter: the weighted feature-level similarity and the aggregate user behavior (using all of the h-hop data). The conditional probability of any other item, t, being selected given a baseline of item i may be computed as: P(t|i) = (1 − α)·g_h(t|i) + α·u_it(h), where α = |s_ij − u_ij(1)| and may put emphasis on the user-level behavior proportional to the degree to which the user's behavior differs from the aggregate. After each selection (including the first) of an item, the conditional probabilities for all other items can be ranked and the top S (based on web page real estate available) can be displayed. Comparison of items may be unrestricted: every item pair i, j in the universe of items may be used in the computations of conditional probability.
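The blended ranking common to Example Embodiments 1-3 might be sketched as follows (α is computed once from the user's current hop; the scores and item names below are placeholder values, not data from the disclosure):

```python
def rank_candidates(g_h, u_h, alpha, top_s=2):
    """Blend aggregate and user-level scores, P(t|i) = (1-a)*g + a*u,
    and return candidates ranked in decreasing order of probability."""
    p = {t: (1.0 - alpha) * g_h[t] + alpha * u_h[t] for t in g_h}
    return sorted(p, key=p.get, reverse=True)[:top_s]

# alpha = |s_ij - u_ij(h)| from the user's latest hop (placeholder value);
# g_h and u_h hold aggregate and user-level scores for candidate items t.
alpha = 0.15
g_h = {"t1": 0.40, "t2": 0.25, "t3": 0.10}
u_h = {"t1": 0.30, "t2": 0.45, "t3": 0.20}
print(rank_candidates(g_h, u_h, alpha))  # ['t1', 't2']
```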
Example Embodiment 4: This example embodiment is similar to Example Embodiment 1, with three exceptions. First, comparison of items may be restricted so that only item pairs i, j whose retail prices are within 20% of each other are compared. Second, historical data used to compute aggregate-level behavior may be geographically restricted. Finally, rankings and display of “you may also like” suggestions may be restricted to only those items in the price band and may reflect historical data from the geographic area in which the new user is located.
Example Embodiment 5: This example embodiment is similar to Example Embodiment 2, with three exceptions. First, comparison of items can be restricted so that only item pairs i, j whose retail prices are within 20% of each other are compared. Second, historical data used to compute aggregate-level behavior may be geographically restricted. Finally, rankings and display of “you may also like” suggestions may be restricted to only those items in the price band and may reflect historical data from the geographic area in which the new user is located.
Example Embodiment 6: This example embodiment is similar to Example Embodiment 3, with three exceptions. First, comparison of items may be restricted so that only item pairs i, j whose retail prices are within 20% of each other are compared. Second, historical data used to compute aggregate-level behavior may also be geographically restricted. Finally, rankings and display of “you may also like” suggestions can be restricted to only those items in the price band and may reflect historical data from the geographic area in which the new user is located.
Although this disclosure has been described with respect to specific embodiments, these embodiments are merely illustrative, and not restrictive of the invention disclosed herein. The description herein of illustrated embodiments of the invention, including the description in the Abstract and Summary, is not intended to be exhaustive or to limit the invention to the precise forms disclosed herein (and in particular, the inclusion of any particular embodiment, feature or function within the Abstract or Summary is not intended to limit the scope of the invention to such embodiment, feature or function). Rather, the description is intended to describe illustrative embodiments, features and functions in order to provide a person of ordinary skill in the art context to understand the invention without limiting the invention to any particularly described embodiment, feature or function, including any such embodiment feature or function described in the Abstract or Summary. While specific embodiments of, and examples for, the invention are described herein for illustrative purposes only, various equivalent modifications are possible within the spirit and scope of the invention, as those skilled in the relevant art will recognize and appreciate. As indicated, these modifications may be made to the invention in light of the foregoing description of illustrated embodiments of the invention and are to be included within the spirit and scope of the invention. Thus, while the invention has been described herein with reference to particular embodiments thereof, a latitude of modification, various changes and substitutions are intended in the foregoing disclosures, and it will be appreciated that in some instances some features of embodiments of the invention will be employed without a corresponding use of other features without departing from the scope and spirit of the invention as set forth. Therefore, many modifications may be made to adapt a particular situation or material to the essential scope and spirit of the invention.
Reference throughout this specification to “one embodiment”, “an embodiment”, or “a specific embodiment” or similar terminology means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment and may not necessarily be present in all embodiments. Thus, respective appearances of the phrases “in one embodiment”, “in an embodiment”, or “in a specific embodiment” or similar terminology in various places throughout this specification are not necessarily referring to the same embodiment. Furthermore, the particular features, structures, or characteristics of any particular embodiment may be combined in any suitable manner with one or more other embodiments. It is to be understood that other variations and modifications of the embodiments described and illustrated herein are possible in light of the teachings herein and are to be considered as part of the spirit and scope of the invention.
In the description herein, numerous specific details are provided, such as examples of components and/or methods, to provide a thorough understanding of embodiments of the invention. One skilled in the relevant art will recognize, however, that an embodiment may be able to be practiced without one or more of the specific details, or with other apparatus, systems, assemblies, methods, components, materials, parts, and/or the like. In other instances, well-known structures, components, systems, materials, or operations are not specifically shown or described in detail to avoid obscuring aspects of embodiments of the invention. While the invention may be illustrated by using a particular embodiment, this is not and does not limit the invention to any particular embodiment and a person of ordinary skill in the art will recognize that additional embodiments are readily understandable and are a part of this invention.
Any suitable programming language can be used to implement the routines, methods or programs of embodiments of the invention described herein, including C, C++, Java, assembly language, etc. Different programming techniques can be employed such as procedural or object oriented. Any particular routine can execute on a single computer processing device or multiple computer processing devices, a single computer processor or multiple computer processors. Data may be stored in a single storage medium or distributed through multiple storage mediums, and may reside in a single database or multiple databases (or other data storage techniques). Although the steps, operations, or computations may be presented in a specific order, this order may be changed in different embodiments. In some embodiments, to the extent multiple steps are shown as sequential in this specification, some combination of such steps in alternative embodiments may be performed at the same time. The sequence of operations described herein can be interrupted, suspended, or otherwise controlled by another process, such as an operating system, kernel, etc. The routines can operate in an operating system environment or as stand-alone routines. Functions, routines, methods, steps and operations described herein can be performed in hardware, software, firmware or any combination thereof.
Embodiments described herein can be implemented in the form of control logic in software or hardware or a combination of both. The control logic may be stored in an information storage medium, such as a computer-readable medium, as a plurality of instructions adapted to direct an information processing device to perform a set of steps disclosed in the various embodiments. Based on the disclosure and teachings provided herein, a person of ordinary skill in the art will appreciate other ways and/or methods to implement the invention.
It is also within the spirit and scope of the invention to implement in software programming or code any of the steps, operations, methods, routines or portions thereof described herein, where such software programming or code can be stored in a computer-readable medium and can be operated on by a processor to permit a computer to perform any of the steps, operations, methods, routines or portions thereof described herein. The invention may be implemented by using software programming or code in one or more general purpose digital computers; alternatively, application specific integrated circuits, programmable logic devices, field programmable gate arrays, or optical, chemical, biological, quantum or nanoengineered systems, components and mechanisms may be used. In general, the functions of the invention can be achieved by any means as is known in the art. For example, distributed or networked systems, components and circuits can be used. In another example, communication or transfer (or otherwise moving from one place to another) of data may be wired, wireless, or by any other means.
A “computer-readable medium” may be any medium that can contain, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, system or device. The computer readable medium can be, by way of example only but not by limitation, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, device, propagation medium, or computer memory. Such computer-readable medium shall generally be machine readable and include software programming or code that can be human readable (e.g., source code) or machine readable (e.g., object code).
A “processor” includes any hardware system, mechanism or component that processes data, signals or other information. A processor can include a system with a general-purpose central processing unit, multiple processing units, dedicated circuitry for achieving functionality, or other systems. Processing need not be limited to a geographic location, or have temporal limitations. For example, a processor can perform its functions in “real-time,” “offline,” in a “batch mode,” etc. Portions of processing can be performed at different times and at different locations, by different (or the same) processing systems.
It will also be appreciated that one or more of the elements depicted in the drawings/figures can also be implemented in a more separated or integrated manner, or even removed or rendered as inoperable in certain cases, as is useful in accordance with a particular application. Additionally, any signal arrows in the drawings/Figures should be considered only as exemplary, and not limiting, unless otherwise specifically noted.
Furthermore, the term “or” as used herein is generally intended to mean “and/or” unless otherwise indicated. As used herein, including the claims that follow, a term preceded by “a” or “an” (and “the” when antecedent basis is “a” or “an”) includes both singular and plural of such term, unless clearly indicated within the claim otherwise (i.e., that the reference “a” or “an” clearly indicates only the singular or only the plural). Also, as used in the description herein and throughout the claims that follow, the meaning of “in” includes “in” and “on” unless the context clearly dictates otherwise. The scope of the present disclosure should be determined by the following claims and their legal equivalents.
Claims
1. A method for identifying consumer items more likely to be selected by a user as a next hop in a search path, the search path comprising a sequence of items which are viewed by the user, the method comprising:
- in a computer having a processor and a memory, wherein the computer is communicatively connected to a computing device;
- generating an interface which is presented to the user via the computing device;
- maintaining probability metrics corresponding to each pair of items in a plurality of items, the probability metrics representing historical online search behavior of a plurality of users, the probability metrics indicating for each pair of items a corresponding similarity of features across multiple dimensions, each dimension corresponding to a particular feature;
- assigning a weight to each of the features;
- determining, responsive to the user viewing a baseline item via the interface on the computing device, for each individual item of the plurality of items, a probability that the user will select the individual item as the next hop in the search path, wherein the probability is determined based on the probability metrics corresponding to the pair of items including the baseline item and the individual item;
- updating the interface running on the computing device, wherein the updated interface presents to the user at least a portion of the plurality of items ranked according to the probabilities; and
- in response to the user selecting one of the presented portion of the plurality of items, updating one or more of the weights assigned to the features based on feature similarities between the baseline item and the selected one of the presented portion of the plurality of items.
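By way of non-limiting illustration only, the ranking and weight-updating steps recited in claim 1 may be sketched as follows. The sketch is in Python; the numeric feature representation, the exponential per-feature similarity, the normalization of scores into probabilities, and the learning-rate update are assumptions made for illustration, not a definitive implementation.

```python
import math

# Items are hypothetical dicts of numeric feature values, e.g. {"price": 25000.0, "mpg": 31.0}.
# `weights` maps each feature name to the importance weight assigned for this user.

def weighted_similarity(baseline, candidate, weights):
    """Combine per-feature similarities across multiple dimensions into one score."""
    score = 0.0
    for feature, w in weights.items():
        diff = abs(baseline[feature] - candidate[feature])
        score += w * math.exp(-diff)  # similarity decays with distance in this dimension
    return score

def rank_next_hops(baseline, items, weights):
    """Rank candidate items by the normalized probability of being the next hop."""
    scores = {item_id: weighted_similarity(baseline, item, weights)
              for item_id, item in items.items()}
    total = sum(scores.values()) or 1.0
    probs = {item_id: s / total for item_id, s in scores.items()}
    return sorted(probs.items(), key=lambda kv: kv[1], reverse=True)

def update_weights(baseline, selected, weights, learning_rate=0.1):
    """Shift weight toward features on which the selected item resembles the baseline."""
    for feature in weights:
        similarity = math.exp(-abs(baseline[feature] - selected[feature]))
        weights[feature] += learning_rate * similarity
    norm = sum(weights.values())
    for feature in weights:
        weights[feature] /= norm  # renormalize so the weights remain comparable
```

In this sketch, update_weights increases the weight of each dimension in proportion to how closely the selected item resembles the baseline item in that dimension, consistent with the final step of claim 1.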
2. The method according to claim 1 further comprising in response to the user selecting the one of the presented portion of the plurality of items, ranking other ones of the portion of the plurality of items and updating the interface running on the computing device to present to the user the other ones of the portion of the plurality of items ranked in decreasing order according to the probabilities.
3. The method according to claim 1, further comprising in response to establishing the baseline item, assigning the weight for each feature based on a determined importance of the feature to the user.
4. The method according to claim 1 further comprising:
- for each paired observation associated with the baseline item, determining a first next item expected to be selected by the user; and
- examining all observations associated with the baseline item to determine a second next item expected to be selected by the user.
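Claim 4 leaves open how the two expected items are determined. One non-limiting reading, assumed in the sketch below, is that the first next item is estimated from each individual paired observation while the second next item is estimated in the aggregate over all observations sharing the baseline; the observation-log format is likewise an assumption.

```python
from collections import Counter

# Illustrative observation log: a list of (baseline_item, next_item) pairs
# recorded from historical search paths of many users.

def first_next_items(observations, baseline):
    """First next item expected from each paired observation for the baseline item."""
    return [next_item for base, next_item in observations if base == baseline]

def second_next_item(observations, baseline):
    """Second next item: the modal next hop across all observations for the baseline item."""
    counts = Counter(first_next_items(observations, baseline))
    return counts.most_common(1)[0][0] if counts else None
```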
5. The method according to claim 4 further comprising:
- defining a kernel for a given radius each time the second next item is selected after the baseline item is established by the user.
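The claim does not fix the kernel's form; an Epanechnikov kernel is one conventional choice, sketched below under the assumption that some distance measure between items is available.

```python
def epanechnikov_kernel(distance, radius):
    """Kernel defined for a given radius: weight falls to zero outside the radius.
    The Epanechnikov form and the item-to-item distance are illustrative assumptions."""
    u = distance / radius
    return 0.75 * (1.0 - u * u) if abs(u) < 1.0 else 0.0
```

Each time the second next item is selected after the baseline item is established, a kernel of this form could be centered on that selection so that nearby observations contribute smoothed weight.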
6. The method according to claim 4 further comprising:
- determining a conditional probability that the user will select a particular first next item after a particular baseline item is established by the user.
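Using the same hypothetical observation log as the claim-4 sketch, the conditional probability of claim 6 may be estimated by simple relative frequency; this estimator is illustrative and not limiting.

```python
from collections import Counter

def conditional_probability(observations, baseline, candidate):
    """Relative-frequency estimate of P(user selects candidate next | baseline established),
    computed over a hypothetical log of (baseline_item, next_item) pairs."""
    next_items = [nxt for base, nxt in observations if base == baseline]
    if not next_items:
        return 0.0
    return Counter(next_items)[candidate] / len(next_items)
```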
7. The method according to claim 4 further comprising:
- making a prediction as to which next item the user will select;
- comparing the prediction with an actual next item selected by the user; and
- assigning a penalty value to the prediction if the prediction is incorrect.
8. The method according to claim 7, wherein the prediction is one of a plurality of predictions, further comprising:
- determining a total penalty value for all incorrect predictions in the plurality of predictions; and
- determining a set of weights that minimizes the total penalty.
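A non-limiting sketch of the penalty accounting of claims 7 and 8 follows. The fixed unit penalty, the evaluation-set format, and the coarse grid search are assumptions made for illustration; rank_next_hops refers to the illustrative function sketched under claim 1.

```python
import itertools

def total_penalty(weights, evaluation_set, penalty=1.0):
    """Sum a fixed penalty over every incorrect next-hop prediction.
    `evaluation_set` holds (baseline, candidate_items, actual_next) triples."""
    total = 0.0
    for baseline, candidates, actual_next in evaluation_set:
        predicted = rank_next_hops(baseline, candidates, weights)[0][0]
        if predicted != actual_next:
            total += penalty
    return total

def minimizing_weights(features, evaluation_set, grid=(0.0, 0.5, 1.0)):
    """Exhaustive search over a coarse weight grid for the set minimizing total penalty."""
    best, best_cost = None, float("inf")
    for vector in itertools.product(grid, repeat=len(features)):
        weights = dict(zip(features, vector))
        cost = total_penalty(weights, evaluation_set)
        if cost < best_cost:
            best, best_cost = weights, cost
    return best
```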
9. A computer program product comprising at least one non-transitory computer readable medium storing instructions translatable by at least one processor in a computer having a memory, wherein the computer is communicatively connected to a computing device, wherein the instructions are translatable by the at least one processor to perform:
- generating an interface which is presented to a user via the computing device;
- maintaining probability metrics corresponding to each pair of items in a plurality of items, the probability metrics representing historical online search behavior of a plurality of users, the probability metrics indicating for each pair of items a corresponding similarity of features across multiple dimensions, each dimension corresponding to a particular feature;
- assigning a weight to each of the features;
- determining, responsive to the user viewing a baseline item via the interface on the computing device, for each individual item of the plurality of items, a probability that the user will select the individual item, wherein the probability is determined based on the probability metrics corresponding to the pair of items including the baseline item and the individual item;
- updating the interface running on the computing device, wherein the updated interface presents to the user at least a portion of the plurality of items ranked according to the probabilities; and
- in response to the user selecting one of the presented portion of the plurality of items, updating one or more of the weights assigned to the features based on feature similarities between the baseline item and the selected one of the presented portion of the plurality of items.
10. The computer program product of claim 9, wherein the at least one non-transitory computer readable medium further stores instructions translatable by the at least one processor to perform:
- in response to the user selecting the one of the presented portion of the plurality of items, ranking other ones of the portion of the plurality of items and updating the interface running on the computing device to present to the user the other ones of the portion of the plurality of items ranked in decreasing order according to the probabilities.
11. The computer program product of claim 9, wherein the at least one non-transitory computer readable medium further stores instructions translatable by the at least one processor to perform:
- in response to establishing the baseline item, assigning the weight for each feature based on a determined importance of the feature to the user.
12. The computer program product of claim 9, wherein the at least one non-transitory computer readable medium further stores instructions translatable by the at least one processor to perform:
- for each paired observation associated with the baseline item, determining a first next item expected to be selected by the user; and
- examining all observations associated with the baseline item to determine a second next item expected to be selected by the user.
13. The computer program product of claim 12, wherein the at least one non-transitory computer readable medium further stores instructions translatable by the at least one processor to perform:
- defining a kernel for a given radius each time the second next item is selected after the baseline item is established by the user.
14. The computer program product of claim 12, wherein the at least one non-transitory computer readable medium further stores instructions translatable by the at least one processor to perform:
- determining a conditional probability that the user will select a particular first next item after a particular baseline item is established by the user.
15. The computer program product of claim 12, wherein the at least one non-transitory computer readable medium further stores instructions translatable by the at least one processor to perform:
- making a prediction as to which next item the user will select;
- comparing the prediction with an actual next item selected by the user; and
- assigning a penalty value to the prediction if the prediction is incorrect.
16. The computer program product of claim 15, wherein the prediction is one of a plurality of predictions and wherein the at least one non-transitory computer readable medium further stores instructions translatable by the at least one processor to perform:
- determining a total penalty value for all incorrect predictions in the plurality of predictions; and
- determining a set of weights that minimizes the total penalty.
Type: Grant
Filed: May 10, 2022
Date of Patent: Mar 5, 2024
Patent Publication Number: 20220270119
Assignee: TrueCar, Inc. (Santa Monica, CA)
Inventors: Thomas J. Sullivan (Santa Monica, CA), Michael D. Swinson (Santa Monica, CA)
Primary Examiner: S. Sough
Assistant Examiner: Cuong V Luu
Application Number: 17/741,229
International Classification: G06Q 30/0201 (20230101); G06F 3/0482 (20130101); G06F 3/04842 (20220101); G06F 16/2457 (20190101); G06F 16/9535 (20190101); G06N 5/04 (20230101); G06Q 30/0202 (20230101);