This is relevant to 2019 and the database performance enhancements we’ve seen. Ten years ago, data models looked much different, primarily because of the way indexing and searching taxed servers. If that already went over your head, then this is going to be a rough post to read through.
Most people are familiar with spreadsheets (Excel, Sheets) and that they allow an X/Y relationship (2 dimensions). You can have fruit categories on one axis, people on the other, and figure out how much fruit each person has. Advanced users are familiar with the concept of pivot tables, where you can further filter that X/Y data on additional criteria (Z, or 3 dimensions).
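To make the 2-vs-3 dimension idea concrete, here’s a tiny sketch in Python (the people, fruit, and counts are all made up): the plain X/Y view sums fruit per person, while the pivot-style view filters on a third axis (month).

```python
# Rows are (person, fruit, month, count) -- the third axis is month.
rows = [
    ("Ann", "apple", "Jan", 3), ("Ann", "apple", "Feb", 2),
    ("Bob", "pear",  "Jan", 5), ("Bob", "apple", "Feb", 1),
]

def fruit_per_person(rows, month=None):
    """Sum fruit counts per person, optionally filtered on the third axis."""
    totals = {}
    for person, fruit, m, count in rows:
        if month is None or m == month:
            totals[person] = totals.get(person, 0) + count
    return totals

print(fruit_per_person(rows))              # {'Ann': 5, 'Bob': 6}
print(fruit_per_person(rows, month="Jan")) # {'Ann': 3, 'Bob': 5}
```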
Going beyond 3 levels of relationships, you need to get into databases (DB). Really good DB engineers make a crap ton of money, because it is both a thankless job and an extremely complicated one. Back in my coding days, a simple calendar booking application I wrote for tanning salons had about 30 different “sheets” that were all interlinked. Any change to those sheets had to be meticulously planned so that it didn’t cause bugs.
This is where data modelling comes into play. You plan out your data markers, their main identifiers, and their attributes. As you progress, you realize that you need more and more attributes. Each attribute then becomes a table of potential options. Let’s try a simple example, describing an apple.
The type, size, and color would be a good start. Then you realize you want to track whether it has seeds, the general shape, the time of harvest, the average price, and a whole bunch of other variables. What you end up with is a table that is an index of those variables, and then a single table per variable type. It could be a database with 50 tables by the end.
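A minimal sketch of that index-plus-lookup structure, with invented values: the apple row stores only foreign keys into the per-variable tables, and a lookup resolves them back into readable values.

```python
# Index of attribute names, plus one small lookup table per attribute.
attribute_index = ["type", "size", "color"]

types  = {1: "Gala", 2: "Fuji", 3: "Granny Smith"}
sizes  = {1: "small", 2: "medium", 3: "large"}
colors = {1: "red", 2: "green", 3: "yellow"}

# An apple row stores only foreign keys into those tables.
apple = {"type": 2, "size": 3, "color": 1}

def describe(row):
    """Resolve the foreign keys back into readable values."""
    lookups = {"type": types, "size": sizes, "color": colors}
    return {attr: lookups[attr][key] for attr, key in row.items()}

print(describe(apple))  # {'type': 'Fuji', 'size': 'large', 'color': 'red'}
```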
And that’s a simple example, since the relationships between the tables are INDEX –> VARIABLE. Complex models have interdependence between variables, and that’s a rabbit hole that can have no end.
Long story short – you need an extremely robust data model before you start, and anytime you make a change, it needs to be really thought out.
I am going to use a loot-based mechanic with randomized stats as an example, as it’s relevant to yesterday’s post. Anthem specifically will serve as the example.
The logic of a loot drop follows:
- Does an item drop
- What rarity
- Whom is it dropping for
- What class
- What type
- What sub-type
- What are the inscriptions
Each one of those questions has an associated algorithm.
- Rarity: This is a factor of the Luck stat, combined with the enemy type. Boss characters have higher odds of dropping better items. Legendary contracts & strongholds guarantee a MW level item at the end.
- Class: There is a large weight associated with items a class can use vs. those of another class. It is not possible to get a MW item for another class, but you can certainly get epic-level items.
- Item class: weapon, skill, consumable, component. The odds appear to be relatively even between them, with the exception of MW drops at the end of missions (as per above).
- Item type: If this is a weapon, then what type of weapon.
- Item sub-type: If this is a grenade, then what type of grenade. This factor is important in order to assign the necessary inscriptions.
- Inscriptions: The inscription pool is filtered based on all the items above, so that the inscriptions applied work either specifically for this piece or for the entire javelin. Anthem applies a sub-level to this step, with major/minor inscriptions, but the logic should be the same.
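The whole decision tree could be sketched as one pipeline, where each step’s answer narrows the options for the next. All the weights, names, and drop rates below are invented for illustration — this is a guess at the shape of the logic, not Anthem’s actual numbers.

```python
import random

def roll_drop(luck, enemy_type, player_class, rng):
    # step 1: does an item drop at all? (luck nudges the odds, capped at 90%)
    if rng.random() > min(0.9, 0.3 + luck / 1000):
        return None
    # step 2: rarity, weighted; bosses skew toward masterwork
    boss_bonus = 25 if enemy_type == "boss" else 0
    rarity = rng.choices(["rare", "epic", "masterwork"],
                         weights=[60, 30, 10 + boss_bonus])[0]
    # steps 3-5: whom it's for, the item class, then the type within that class
    item_class = rng.choice(["weapon", "skill", "consumable", "component"])
    subtypes = {"weapon": ["rifle", "pistol"], "skill": ["grenade", "seal"]}
    item_type = rng.choice(subtypes.get(item_class, ["generic"]))
    # step 6 (inscriptions) would roll here, from a pool filtered by all of the above
    return {"for": player_class, "rarity": rarity,
            "class": item_class, "type": item_type}

drop = roll_drop(luck=600, enemy_type="boss", player_class="Storm",
                 rng=random.Random(7))
print(drop)
```

The rng is injected so the same seed reproduces the same drop — handy when you’re testing a loot table rather than praying to it.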
Each item in the game has a database entry with at least these variables. Each inscription would have a basic yes/no table associated with these variables. In effect, each inscription would have a validation phase confirming that it can indeed be applied to a particular item.
The last patch changed the logic at the item type level. Meaning that if you had a grenade, then you were pulling from the grenade pool. Prior to this, all inscriptions were at the javelin level (the “whom” step). This is a major step forward, as it’s moved down two logic layers. It effectively removed 75% of the “dead stats” in the game, things that provide no value at all.
Anthem doesn’t yet look at the sub-type. Which means you can get inscriptions that apply fire damage to an item that only deals lightning. These are the other 25% of “dead stats”. The logic check appears to be whether the inscription applies only to the item, or to the entire javelin. In the previous example, if the inscription applied to the entire javelin, then it would potentially have some use (e.g. a gun or other ability). This is why sub-type is important, as the sub-type would indicate the effects of that particular item.
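A minimal sketch of that validation phase, assuming a yes/no compatibility table like the one described above (the inscription names and flags are invented). With the item’s damage type in the check, the fire-on-lightning case fails validation instead of becoming a dead stat.

```python
# Each inscription row flags which item variables it is compatible with.
# None means "any value is fine" for that variable.
inscriptions = {
    "fire_damage":   {"item_class": {"weapon", "skill"}, "damage_type": {"fire"}},
    "ammo_capacity": {"item_class": {"weapon"},          "damage_type": None},
}

def can_apply(inscription, item):
    """Validation phase: every constrained variable must match the item."""
    rules = inscriptions[inscription]
    for variable, allowed in rules.items():
        if allowed is not None and item[variable] not in allowed:
            return False
    return True

lightning_seal = {"item_class": "skill", "damage_type": "lightning"}
fire_rifle = {"item_class": "weapon", "damage_type": "fire"}
print(can_apply("fire_damage", lightning_seal))  # False -- the dead-stat case
print(can_apply("fire_damage", fire_rifle))      # True
```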
Let’s use Frost Shards (X) as an example. This is a Storm (A) ability, considered a Blast Seal (B) (the E button on PC). It deals C damage of D type, has E charges, recharges at F rate, and applies effect G at a rate of H. Its MW inscription is I. Each of those letters is a separate table in the database. (For those counting, that’s 9 variables… and this is a simple example.)
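As a sketch, that normalized record could look like the following — every ID is a foreign key into its own table, and all the values are invented for illustration.

```python
# Per-variable lookup tables (values invented).
javelin_classes = {1: "Storm"}
gear_types      = {1: "Blast Seal"}
damage_types    = {1: "frost", 2: "fire", 3: "lightning"}
effects         = {1: "freeze"}
mw_inscriptions = {1: "+50% blast damage"}  # invented text

# The item row itself is mostly foreign keys plus a few scalar stats.
frost_shards = {
    "class_id": 1,           # A: Storm
    "type_id": 1,            # B: Blast Seal
    "damage": 75,            # C
    "damage_type_id": 1,     # D: frost
    "charges": 3,            # E
    "recharge_s": 4.0,       # F
    "effect_id": 1,          # G: freeze
    "effect_rate": 0.25,     # H
    "mw_inscription_id": 1,  # I
}

print(damage_types[frost_shards["damage_type_id"]])  # frost
```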
Let’s say that you get a drop and you’ve moved all the way down the logic tree to inscriptions. This list of options should include:
- generic traits (e.g. health, damage) at the javelin level (applies to all javelins)
- class traits at the javelin level (applies to only the Storm)
- type traits at the javelin level (applies only to Blast Seal)
- sub-type traits that only apply to this specific item (X) at either the gear level or the javelin level
Explicitly, it should not be possible to have a gear-level inscription on an item that cannot use that inscription. That only works if there’s one more logic check in the chain than there is currently.
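Putting the four tiers together, a hedged sketch of assembling that candidate list — every inscription name, tier, and scope below is invented, but the filter follows the bullets above: each inscription belongs to a specificity tier, and it survives only if it matches this exact item’s value for that tier.

```python
# Candidate inscriptions, each tagged with a tier and what it applies to.
pool = [
    {"name": "+health",         "tier": "generic",  "scope": "javelin", "applies_to": None},
    {"name": "+storm_damage",   "tier": "class",    "scope": "javelin", "applies_to": "Storm"},
    {"name": "+seal_damage",    "tier": "type",     "scope": "javelin", "applies_to": "Blast Seal"},
    {"name": "+frost_effect",   "tier": "sub_type", "scope": "gear",    "applies_to": "Frost Shards"},
    {"name": "+colossus_armor", "tier": "class",    "scope": "javelin", "applies_to": "Colossus"},
]

def candidates(pool, javelin, gear_type, item):
    """Keep only the inscriptions valid for this exact item."""
    wanted = {"generic": None, "class": javelin, "type": gear_type, "sub_type": item}
    return [i["name"] for i in pool if i["applies_to"] == wanted[i["tier"]]]

print(candidates(pool, "Storm", "Blast Seal", "Frost Shards"))
# ['+health', '+storm_damage', '+seal_damage', '+frost_effect']
```

The Colossus entry falls out of the list, which is exactly the gear-level guarantee argued for above: nothing in the final pool can be dead on arrival.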
As complicated as this post is to read through, the actual implementation is relatively straightforward if and only if their data model supports it. It’s entirely possible that this level of granularity has not been applied, but given the posts I’ve seen from BW… that would be exceedingly surprising. So cheers on some major progress on loot drops; still a few more steps to go.