Determining the legality of shengji plays
I’ve been working on my online version of the shengji card game for a little while now. One of the more interesting aspects of the game is how a legal play is defined, and the corresponding algorithm to compute that.
Problem overview⌗
Shengji is a trick-taking game, which means that in each round, every player plays the same number of cards. The first player in the trick sets the “format” of the trick, which subsequent players must attempt to follow. Only players whose plays match the format can win the trick (and then start the next trick).
For example, let’s say that a trick starts with 6♤. This implies the following:
- The suit to evaluate the trick format within is spades (♤).
- Each player must play one card.
So, a play which matches the trick format would be a single card in spades (e.g. 8♤).
If a player has no spades, then the player may play any single card. And, if they play a single card in trump, then that is also a legal play.
The same goes if the trick starts with a pair of 6♤, with a twist:
- if the player can play a pair of spades, they must; otherwise,
- if the player can play a single spade and a non-spade card, they must; otherwise,
- the player must play two non-spade cards.
The overall algorithm is, therefore:
1. if the play matches the format of the trick, it is legal.
2. if the play does not match the format of the trick, but the player can match the format of the trick, then the play is illegal.
3. if the play matches the next-weaker trick format, then it is legal.
4. if the play does not match the next-weaker format of the trick, but the player can match the next-weaker format of the trick, then the play is illegal.
5. repeat steps (3) and (4) until there is no next-weaker trick format.
The two key components of the algorithm are the function to find the next-weaker trick format, and the function to determine whether a given play matches a trick format.
fn is_playable(format, play, hand) -> Mapping {
    let mut format = format;
    loop {
        // The play matches the current format: legal.
        if let Some(mapping) = matches(format, play) {
            return mapping;
        }
        // The play doesn't match, but something in the hand does: illegal.
        if matches(format, hand).is_some() {
            return ILLEGAL_PLAY;
        }
        // Otherwise, weaken the format and try again.
        if let Some(weaker) = decompose_format(format) {
            format = weaker;
        } else {
            return ILLEGAL_PLAY;
        }
    }
}
What is a trick format?⌗
The legal leading trick formats in Shengji are:
1. n-tuples, e.g. singles, pairs, triples, etc. of identical cards
2. n-by-m tractors, i.e. a sequence of m consecutive n-tuples
3. “throws”, which consist of a collection of (1) and (2).
However, there are cases where we need to represent “ragged” consecutive tuples:
Suppose that the actual trick format is a 3x2 tractor (i.e. two consecutive triples), and the player has 334JQKKKAA in their hand. The player doesn’t have KKKAAA, so they don’t themselves have a 3x2 tractor. They also don’t have two triples (e.g. JJJ and AAA), so they can’t play that, either.
If we compute the next-weaker format as a tractor and two random cards, then the player is required to play KKAA and would likely choose 4J. This is much weaker than intended, though, since the player has KKK, and we’d want to force out all 3 K’s. Alternatively, if we compute the next-weaker format as a triple and a pair, the player could play KKK and 33… but we’d want to force out the KKKAA.
Therefore, we express a trick format, generically, as a multiset of sequential tuple-lengths, i.e. Multiset<List<usize>>, and a play which matches the trick format as a Multiset<List<List<Card>>>.
What’s a multiset?
A multiset is a set which allows elements to be duplicated, but which does not have a particular ordering between elements.
For example, you could have a set {2, 3}, but adding another 2 would result in the same set. In a multiset, you would have {2: 1, 3: 1}, and adding another 2 results in {2: 2, 3: 1}. The astute reader notes that this is just a particular interpretation of a map from value to multiplicity.
Some examples:
- a pair is represented as {[2]}, and a single and triple as {[1]} and {[3]}, respectively.
- a standard 2x2 tractor is represented as {[2, 2]}, and a 3x2 and 2x3 tractor as {[3, 3]} and {[2, 2, 2]}, respectively.
- a throw of a pair and a single card is represented as {[2], [1]}.
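Concretely, we could model a trick format along these lines (again, an illustrative sketch – the types and helper names are mine rather than the real codebase’s), reusing the value-to-multiplicity idea with a shape (a list of consecutive tuple lengths) as the value:
use std::collections::BTreeMap;

// A trick format as a multiset of sequential tuple-lengths:
// each key is a shape (list of consecutive tuple sizes),
// each value is how many times that shape appears in the format.
type TrickFormat = BTreeMap<Vec<usize>, usize>;

fn format_of(shapes: &[Vec<usize>]) -> TrickFormat {
    let mut format = TrickFormat::new();
    for shape in shapes {
        *format.entry(shape.clone()).or_insert(0) += 1;
    }
    format
}

fn main() {
    let pair = format_of(&[vec![2]]);            // {[2]}
    let tractor_3x2 = format_of(&[vec![3, 3]]);  // {[3, 3]}
    let throw = format_of(&[vec![2], vec![1]]);  // {[2], [1]}
    println!("{:?}\n{:?}\n{:?}", pair, tractor_3x2, throw);
}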
How can we tell if a play matches a trick format?⌗
This can be solved recursively with some dynamic programming:
The simplest case of the play-matching problem is a trick format of {[n]}: we just need to check if there is an n-tuple in the play.
If there are multiple tuples that need to be matched, i.e. {[m], [n], [o], [p]}, we could match a single tuple, and then remove the cards used for that match from the play. This might look something like:
1. {[m], [n], [o], [p]} | play
2. {[n], [o], [p]}      | play - [m]
3. {[o], [p]}           | play - [m] - [n]
...                     | ...
?. {}                   | {}
If we fail to reach the empty-format/empty-hand case, the play does not match the format. However, this can be path-dependent: if the format is {[1], [2]} and the cards are AKK, the mapping {[A], [KK]} matches the format, but the mapping {[K], [AK]} does not. Therefore, we need to explore each possible ordering of fulfilling an element.
Naively, this could be a search problem: we could choose to start on each of [m], [n], [o], or [p], and we’d end up visiting all possible orderings.
But, in tractor, m, n, o, and p are actually often correlated: most throws contain a larger component and then a bunch of single cards and a couple of pairs; the fully-decomposed (more on this later) version of every trick format is only single cards.
fn matches(format, cards) -> Option<Mapping> {
    let mut visited = {};
    let mut queue = [];
    queue.push({});
    // Search over partial assignments of cards to format groups.
    while let Some(assigned) = queue.pop() {
        if visited.contains(assigned) {
            continue;
        }
        visited.insert(assigned);
        let remaining_groups = format - assigned.formats();
        if remaining_groups.is_empty() {
            // Every group has been assigned cards: we have a full mapping.
            return Some(assigned);
        }
        let remaining_cards = cards - assigned.cards();
        for group in remaining_groups {
            let options = remaining_cards.find_possible_options(group);
            if options.is_empty() {
                continue;
            }
            // Extend the partial assignment with each way of filling this group.
            queue.extend(options.map(|option| assigned + option));
        }
    }
    return None;
}
This means that we’d often be visiting the same subproblem! This can get pretty computationally expensive, even with a visited set to skip out of the expensive part.
Consider the case where m = n = o = p = 1, and we have 4 cards [abcd]. The naive approach would have us explore far too many matchings! Furthermore, we actually know up front that we need to assign unique cards to each component, i.e. card a can be used for only one of [m], [n], [o], or [p].
Multiset counting⌗
It turns out this is actually a subproblem we can solve directly!
The basic set selection problem asks us to select a subset of a specific cardinality from another set. We do this by assigning each item in the set a zero-based index, and then representing a selection as a binary number where each bit corresponds to an item in the set.
It looks something like this:
       a b c d
       | | | |
index: 0 1 2 3
such that the set of all subsets is
0000 {}
0001 {d}
0010 {c}
0011 {cd}
0100 {b}
0101 {bd}
0110 {bc}
0111 {bcd}
1000 {a}
1001 {ad}
1010 {ac}
1011 {acd}
1100 {ab}
1101 {abd}
1110 {abc}
1111 {abcd}
If we filter this to only the numbers where the total number of 1s is 2, we get the set of cardinality-2 subsets!
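Here’s a quick sketch of that enumeration and filter (illustrative only – the real code deals in cards, not characters):
fn main() {
    let items = ['a', 'b', 'c', 'd'];
    let n = items.len();

    // Every subset corresponds to an n-bit number (leftmost bit = index 0).
    // Keeping only the numbers with exactly two 1s yields the cardinality-2 subsets.
    for mask in 0u32..(1 << n) {
        if mask.count_ones() != 2 {
            continue;
        }
        let subset: String = (0..n)
            .filter(|&i| ((mask >> (n - 1 - i)) & 1) == 1)
            .map(|i| items[i])
            .collect();
        println!("{:04b} {{{}}}", mask, subset);
    }
}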
To expand this to multisets, we need to be able to (a) represent the number of times an element shows up in the subset, and (b) skip counting invalid subsets, i.e. those where we use an element more times than it existed in the original multiset.
For example, let’s say that there are two of a and c, three of b, and only one of d:
              a b c d
              | | | |
       index: 0 1 2 3
multiplicity: 2 3 2 1
Again, we represent this as a number where each index corresponds to a digit. However, rather than the digit being a binary digit (0 or 1), we let the digit represent the number of times that index appears in the resulting set – and we skip the cases where the digit exceeds the multiplicity of the index.
0000 {}
0001 {d}
0010 {c}
0011 {cd}
0020 {cc}
0021 {ccd}
0100 {b}
0101 {bd}
0110 {bc}
0111 {bcd}
0120 {bcc}
0121 {bccd}
0200 {bb}
0201 {bbd}
0210 {bbc}
0211 {bbcd}
0220 {bbcc}
0221 {bbccd}
0300 {bbb}
0301 {bbbd}
0310 {bbbc}
0311 {bbbcd}
0320 {bbbcc}
0321 {bbbccd}
1000 {a}
1001 {ad}
1010 {ac}
1011 {acd}
...
2321 {aabbbccd}
This allows us to solve all of the n-tuple matches without needing to search each path one at a time.
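Here’s a rough sketch of that enumeration (simplified to characters and plain arrays; the real code works in terms of cards and the multiset type): an odometer over per-index counts, where each digit is capped at its element’s multiplicity and we keep only the sub-multisets of the size we need:
fn main() {
    let items = ['a', 'b', 'c', 'd'];
    let multiplicity = [2usize, 3, 2, 1];
    let target_size = 3; // e.g. we need to pull three cards out of this multiset

    // Odometer over digits: digits[i] is how many copies of items[i] we take,
    // and is never allowed to exceed multiplicity[i].
    let mut digits = vec![0usize; items.len()];
    loop {
        if digits.iter().sum::<usize>() == target_size {
            let subset: String = items
                .iter()
                .zip(&digits)
                .flat_map(|(&item, &count)| std::iter::repeat(item).take(count))
                .collect();
            println!("{:?} {{{}}}", digits, subset);
        }
        // Increment the rightmost digit, carrying whenever a digit would
        // exceed its multiplicity; stop once the odometer wraps around.
        let mut i = digits.len();
        loop {
            if i == 0 {
                return;
            }
            i -= 1;
            if digits[i] < multiplicity[i] {
                digits[i] += 1;
                break;
            }
            digits[i] = 0;
        }
    }
}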
As an optimization, since we know that we’re always looking for matches whose cardinality equals that of the given play (players must always play the same number of cards as the format), we can skip many of the possible subsets in the enumeration by moving a count from one index to another (i.e. simultaneously decrementing one index and incrementing another) rather than always incrementing the right-most index and carrying the overflow.
Tractors⌗
With multiset-counting in hand, the remaining trick-formats are cases where there are sequential tuples to handle, i.e. tractors and tractor-like formats. We can enumerate all the viable tractors in a given hand quickly – hands aren’t that big – but tractors have a non-local effect (choosing one tractor may preclude other tractors or tuples) that gets in the way of tricks like the multiset-counting approach.
It’s very rare that a player has many disjoint tractors, and that those tractors match the format, so it suffices to solve this using the search-based approach of exploring every possible matching tractor.
fn matches(format, cards) -> Option<Mapping> {
    let mut visited = {};
    let mut queue = [];
    queue.push({});
    while let Some(assigned) = queue.pop() {
        if visited.contains(assigned) {
            continue;
        }
        visited.insert(assigned);
        let remaining_groups = format - assigned.formats();
        if remaining_groups.is_empty() {
            return Some(assigned);
        }
        let remaining_cards = cards - assigned.cards();
        for (group, multiplicity) in group_groups_by_shape(remaining_groups) {
            let options = if group.len() == 1 {
                // Tuple case: solved directly with the multiset counting above.
                remaining_cards.assign_multisets_for_tuples(group[0], multiplicity)
            } else {
                // Tractor case: enumerate every candidate tractor and search.
                remaining_cards.find_possible_options(group)
            };
            if options.is_empty() {
                continue;
            }
            queue.extend(options.map(|option| assigned + option));
        }
    }
    return None;
}
Lazy computation and other optimizations⌗
In some cases, we want to know if there is any way for a player to meet the specified trick format, e.g. to evaluate the legality of a given play. In other cases, we want to enumerate all the viable matchings, e.g. when determining the winner of a throw. And sometimes, we want to find the first viable matching, e.g. when surfacing a hint to the player for a legal play.
To avoid code duplication, we want to implement the previously described play-matching technique as an incremental iterator, and do as little of the computation up front as possible. After all, this code path is used in the UI to show the user whether a play is legal or not, so we’ve only got a single browser thread that we don’t want to block.
Thankfully, this isn’t too hard: we store the mappings and in-progress search in the iterator’s state, and only emit complete mappings.
We can also bail out of the higher-level search if we know that there is any remaining group which is unsatisfiable – after all, there’s no value to getting a mostly matching set, since the final return value is still going to be “no match”.
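Sketched against Rust’s Iterator trait – with a toy permutation search standing in for the real matcher, and all of the names being mine – the shape of this is roughly:
// A lazily-driven search: the work stack lives on the iterator, and each call
// to next() does only as much searching as needed to produce one more complete
// result. The real matcher is consumed the same way: .next().is_some() answers
// "is this play legal at all?", .next() produces a hint, and iterating to the
// end enumerates every matching when scoring a throw.
struct LazyPermutations {
    // Each stack entry is (cards not yet placed, cards placed so far).
    stack: Vec<(Vec<char>, Vec<char>)>,
}

impl Iterator for LazyPermutations {
    type Item = Vec<char>;

    fn next(&mut self) -> Option<Vec<char>> {
        while let Some((remaining, placed)) = self.stack.pop() {
            if remaining.is_empty() {
                return Some(placed); // emit only complete orderings
            }
            for i in 0..remaining.len() {
                let mut rest = remaining.clone();
                let mut longer = placed.clone();
                longer.push(rest.remove(i));
                self.stack.push((rest, longer));
            }
        }
        None
    }
}

fn main() {
    let mut search = LazyPermutations {
        stack: vec![(vec!['a', 'b', 'c'], vec![])],
    };
    // Only enough of the search to find the first complete result runs here...
    println!("first: {:?}", search.next());
    // ...and the rest of the work happens only if the caller keeps iterating.
    println!("remaining: {}", search.count());
}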
How do we find the next-weaker trick format?⌗
Recall that there are actually two components to play legality: whether a given play matches a given trick format, and a way to get the next-weaker trick format from the given trick format.
The next-weaker format is an ill-defined concept: especially with larger groups of cards, there are multiple possible decompositions at each stage.
What requirements do we have on decomposition sequences?⌗
For example, if we start with {[3]}, we should clearly decompose to {[2], [1]} and then {[1], [1], [1]}. However, if we start with {[4]}, we want to include {[3], [1]}, {[2, 2]}, {[2], [2]}, {[2], [1], [1]}, and {[1], [1], [1], [1]} in the sequence.
Note that we may include a tractor in the decomposition sequence, even if the original format did not include a tractor! This corresponds to the general rarity of high-cardinality n-tuple plays; they are harder to get and thus “more powerful” than tractors.
If we look carefully at a decomposition sequence, we can see that the decomposition is constructed to be a function only of the number of cards! That is, the decomposition for a pair of pairs is a suffix of the decomposition for a quadruple. This is a helpful property for dynamic programming: it means that we can index into our cache with fewer parameters. Furthermore, it guarantees consistency as the game scales to more decks.
An additional complication occurs in the six-card-or-above decomposition: there are two six-card tractors (2x3 and 3x2), both of which must be visited in the sequence. In fact, the six-card sequence is
{[6]} six-of-a-kind
{[5], [1]} five-of-a-kind and an unrelated card
{[4, 2]} four-of-a-kind followed by a pair
{[2, 4]} pair followed by a four-of-a-kind
{[4], [2]} four-of-a-kind and a pair
{[4], [1], [1]} four-of-a-kind and two unrelated cards
{[3, 3]} two sequential triples (tractor)
{[3], [3]} two triples
{[3, 2], [1]} a triple followed by a pair, and an unrelated card
{[2, 3], [1]} a pair followed by a triple, and an unrelated card
{[3], [2], [1]} a triple, a pair, and an unrelated card
{[3], [1], [1], [1]} a triple and three other cards
{[2, 2, 2]} three sequential pairs (tractor)
{[2, 2], [2]} two sequential pairs (tractor) and a pair
{[2], [2], [2]} three pairs
{[2, 2], [1], [1]} two sequential pairs (tractor) and two unrelated cards
{[2], [2], [1], [1]} two pairs and two unrelated cards
{[2], [1], [1], [1], [1]} one pair and four unrelated cards
{[1], [1], [1], [1], [1], [1]} six unrelated cards
By storing the subproblem results in a table, we can avoid re-computing this lookup on every evaluation – and we evaluate this on both the client and the server, so it gets called pretty often.
Setting up the problem⌗
The decomposition problem starts with a particular play requirement, e.g. a tractor of sequential triples {[3, 3]}. From the above table of the six-of-a-kind decomposition, we need to generate the following sequence:
{[3, 3]} two sequential triples (tractor)
{[3], [3]} two triples
{[3, 2], [1]} a triple followed by a pair, and an unrelated card
{[2, 3], [1]} a pair followed by a triple, and an unrelated card
{[3], [2], [1]} a triple, a pair, and an unrelated card
{[3], [1], [1], [1]} a triple and three other cards
{[2, 2, 2]} three sequential pairs (tractor)
{[2, 2], [2]} two sequential pairs (tractor) and a pair
{[2], [2], [2]} three pairs
{[2, 2], [1], [1]} two sequential pairs (tractor) and two unrelated cards
{[2], [2], [1], [1]} two pairs and two unrelated cards
{[2], [1], [1], [1], [1]} one pair and four unrelated cards
{[1], [1], [1], [1], [1], [1]} six unrelated cards
For the purposes of decomposition, we can separate out all of the groups of a single card – that component is fully decomposed.
Simple cases⌗
Let’s start with a simple decomposition of a triple:
{[3]} We start with the triple
{[2], [1]} We can split the triple into a pair and a different card
{[1], [1], [1]} And then we can split the pair into two more cards
We should also look at the decomposition for the basic tractor (two pairs in sequential order):
{[2, 2]} Start with the tractor
{[2], [2]} Break the adjacency requirement
{[2], [1], [1]} Break one pair into singles
{[1], [1], [1], [1]} Break the other pair into singles
So far, so good. We start to notice a pattern here: the decomposition problem is equivalent to enumerating every valid format with the right total number of cards, and then removing the “more powerful” prefix.
To enumerate the valid formats, we need to:
- find the possible tuples that can make up the total number of cards (i.e. partitioning the set of cards into tuples)
- enumerate all valid sequential groupings of tuples into tractor-like constructs
fn full_decomposition_ordering(num_cards) -> Vec<PlayRequirements> {
    let mut full_decomp = [];
    for grouping in find_tuple_partitions(num_cards) {
        // Single cards are already fully decomposed; only the larger tuples
        // need to be grouped into sequential (tractor-like) constructs.
        let (single_cards, larger_tuples) = grouping.partition(|v| v == 1);
        if larger_tuples.is_empty() {
            full_decomp.push(single_cards);
        } else {
            for grouped_larger_tuples in group_into_sequential_tuples(larger_tuples) {
                full_decomp.push(grouped_larger_tuples + single_cards);
            }
        }
    }
    full_decomp.unique()
}
Partitioning cards into tuples⌗
This is a recursive algorithm: we start with the simple case (there’s only one way to partition a single card). We can then take the next-smaller set of partitions and generate the new set: one item for each of the smaller groupings where one of the tuples is larger by one, and one where we add a new single-card tuple at the end.
fn find_tuple_partitions(num) {
    if num == 1 {
        // Base case: a single card can only be partitioned one way.
        return [[1]];
    }
    let smaller = find_tuple_partitions(num - 1);
    let mut groupings = [];
    for g in smaller {
        // Grow each existing tuple by one...
        for v in g {
            groupings.push(g - [v] + [v + 1]);
        }
        // ...or add the new card as its own single-card tuple.
        groupings.push(g + [1]);
    }
    groupings
}
This gets us the following:
find_tuple_partitions(4):
[
    [3, 1]
    [2, 2]
    [2, 1, 1]
    [1, 1, 1, 1]
]
Generating partitions of sequential tuples⌗
We then need to take these partitions and combine them into valid sequential tuples, i.e. tractors and tractor-like sequential plays.
This can also be done recursively:
- the base case is the partitioning of a single item, which is just that item in its own group
- the partitions for size k are built from the partitions for size k-1, where the additional item is either inserted into one of the existing groups or added as a new group of its own. In fact, this is essentially the same recurrence as used for finding the tuple partitions in the first place, just applied in a different context (a sketch follows below).
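Here’s a sketch of that recurrence (the function name matches the examples below, but the body is my own illustration):
// A partition is a list of groups; each new item either joins one of the
// existing groups or opens a new group of its own.
fn sequential_partition(items: &[usize]) -> Vec<Vec<Vec<usize>>> {
    let (&last, rest) = match items.split_last() {
        Some(split) => split,
        // Base case: the empty list has exactly one partition, with no groups.
        None => return vec![vec![]],
    };
    let mut partitions = vec![];
    for smaller in sequential_partition(rest) {
        // Rule 1: add the new item to each existing group in turn.
        for i in 0..smaller.len() {
            let mut extended = smaller.clone();
            extended[i].push(last);
            partitions.push(extended);
        }
        // Rule 2: add the new item as a new group.
        let mut with_new_group = smaller;
        with_new_group.push(vec![last]);
        partitions.push(with_new_group);
    }
    partitions
}

fn main() {
    // The five partitions of [0, 1, 2], matching the diagrams that follow
    // (though not necessarily in the same order).
    for partition in sequential_partition(&[0, 1, 2]) {
        println!("{:?}", partition);
    }
}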
For example: if we are computing the potential partitions for the group [0, 1], the following are possible:
sequential_partition([0, 1])[0]:
----------------
| item 0, item 1 |
----------------
sequential_partition([0, 1])[1]:
-------- --------
| item 0 | | item 1 |
-------- --------
we can construct the partitions for [0, 1, 2]
by:
- adding the new item to each of the existing groups from each of the existing partitions
- adding the new item as a new group to each of the existing partitions
Applying the first rule, we get:
sequential_partition([0, 1, 2])[0] = sequential_partition([0, 1])[0] + item 2:
---------------- ------------------------
| item 0, item 1 | ==> | item 0, item 1, item 2 |
---------------- ------------------------
sequential_partition([0, 1, 2])[1] = sequential_partition([0, 1])[1] + item 2:
-------- -------- ---------------- --------
| item 0 | | item 1 | ==> | item 0, item 2 | | item 1 |
-------- -------- ---------------- --------
sequential_partition([0, 1, 2])[2] = sequential_partition([0, 1])[1] + item 2:
-------- -------- -------- ----------------
| item 0 | | item 1 | ==> | item 0 | | item 1, item 2 |
-------- -------- -------- ----------------
and, applying the second, we add two more:
sequential_partition([0, 1, 2])[3] = sequential_partition([0, 1])[1] + item 2:
-------- -------- -------- -------- --------
| item 0 | | item 1 | ==> | item 0 | | item 1 | | item 2 |
-------- -------- -------- -------- --------
sequential_partition([0, 1, 2])[4] = sequential_partition([0, 1])[0] + item 2:
---------------- ---------------- --------
| item 0, item 1 | ==> | item 0, item 1 | | item 2 |
---------------- ---------------- --------
Using this on the earlier case of four cards gets us
sequential_partition(find_tuple_partitions(4)):
[
    [[3], [1]]
    [[2, 2]]
    [[2], [2]]
    [[2], [1], [1]]
    [[1], [1], [1], [1]]
]
which is almost exactly what we need! The one last piece is that, when we’re looking at five or more cards, a sequential group can mix tuple sizes – we need both the triple-then-pair {[3, 2]} and the pair-then-triple {[2, 3]} – but our above-listed algorithm produces each group of adjacent tuples in only one order. We can solve this by including every unique permutation of the tuples within each group, and then we’re done!
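A small sketch of that last step (a standalone helper of my own, not necessarily how the real code does it):
use std::collections::BTreeSet;

// For a group that mixes tuple sizes, emit every distinct ordering of its
// tuple lengths: [3, 2] also yields [2, 3], while [2, 2] yields only itself.
fn unique_permutations(group: &[usize]) -> BTreeSet<Vec<usize>> {
    if group.len() <= 1 {
        return BTreeSet::from([group.to_vec()]);
    }
    let mut result = BTreeSet::new();
    for i in 0..group.len() {
        let mut rest = group.to_vec();
        let first = rest.remove(i);
        for mut tail in unique_permutations(&rest) {
            tail.insert(0, first);
            result.insert(tail);
        }
    }
    result
}

fn main() {
    println!("{:?}", unique_permutations(&[3, 2])); // {[2, 3], [3, 2]}
    println!("{:?}", unique_permutations(&[2, 2])); // {[2, 2]}
}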
Some caching, too⌗
In practice, almost all games of shengji only have a few decks, so there aren’t that many possible partitionings. We can throw these recursive computations into a lookup table so that we can respond even more quickly in the UI.
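A minimal sketch of what that lookup table could look like (assuming a process-wide cache behind a Mutex, with a placeholder standing in for the actual recursive computation):
use std::collections::HashMap;
use std::sync::{Mutex, OnceLock};

// A format here is just a list of tuple shapes, e.g. [[2, 2], [1]]; the real
// representation is richer, but it's enough to sketch the caching.
type Format = Vec<Vec<usize>>;

// Process-wide cache keyed by the card count, since the decomposition is a
// function of the number of cards alone.
fn decomposition_cache() -> &'static Mutex<HashMap<usize, Vec<Format>>> {
    static CACHE: OnceLock<Mutex<HashMap<usize, Vec<Format>>>> = OnceLock::new();
    CACHE.get_or_init(|| Mutex::new(HashMap::new()))
}

fn full_decomposition_ordering(num_cards: usize) -> Vec<Format> {
    if let Some(hit) = decomposition_cache().lock().unwrap().get(&num_cards).cloned() {
        return hit;
    }
    let computed = compute_decomposition(num_cards); // the expensive part
    decomposition_cache()
        .lock()
        .unwrap()
        .insert(num_cards, computed.clone());
    computed
}

// Placeholder for the recursive computation described earlier; here it just
// returns the fully-decomposed format (all single cards).
fn compute_decomposition(num_cards: usize) -> Vec<Format> {
    vec![vec![vec![1]; num_cards]]
}

fn main() {
    println!("{:?}", full_decomposition_ordering(4)); // computed and cached
    println!("{:?}", full_decomposition_ordering(4)); // served from the cache
}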
Amusingly, I actually got a bug report complaining about slow performance when someone played thirteen cards in a single play. As it turned out, I had been using React to render the “help” page, which would eventually show the player a potential valid match for every trick-format in the full decomposition. At the time, the implementation used only the naive search approach, and it turns out there’s a lot of different orders in which you can randomly select thirteen cards. Somewhat unusually, then, the optimizations and algorithms discussed here were actually necessary to enable these larger games!