Coefficient hunting in gargantuan generating functions
How to win Arcs and influence polynomials
This post is the thrilling conclusion to How to conquer the galaxy (probably), in which we discussed how the dice pools used in the board game Arcs can be modeled with things called generating functions. Definitely go check that out if you haven’t already!
At the end of the last post, we found that even for very simple pools of Arcs dice, the resulting generating function can quickly explode into hundreds of monomials, which can be quite inefficient to calculate and analyze. To find out how inefficient, I did some sketchy back-of-the-envelope complexity calculations, and found that if you’re naively multiplying polynomials of some maximum degree (where the degree of a multivariate polynomial is equal to the sum of its variables’ degrees), you end up with a runtime that grows exponentially with the number of dice in the pool. A process whose runtime grows exponentially with the size of its input (in our case, the number of dice) is called an exponential time algorithm, and is terrible.
There are indeed faster ways of multiplying polynomials than naively going term-by-term, but from what I’ve read, these techniques involve fairly advanced math and tend to only become very efficient for polynomials of extremely high degree. Our problem scales with the number of dice in the pool, so the increase in complexity didn’t seem worth it.
So, can we do better? Well, as you may have noticed, in the last post I included a sort of dice pool calculator at the top which (hopefully) runs fairly quickly. Let’s take another look at it:
As you add more dice to this, you’ll notice that the resulting generating function is displayed in an unexpanded form. That’s because the calculator isn’t expanding the polynomial at all! Instead, under the hood, all it’s doing is evaluating the unexpanded (and cheap to compute) generating function on a single value, and using the result to recover every coefficient of the fully expanded polynomial. How is this possible?
A reasonable guess might be polynomial interpolation, which allows you to recover the unique polynomial that passes through a given set of points. However, for a single-variable polynomial of degree $n$, you need to compute $n + 1$ points to perform interpolation. Never mind the fact that interpolation is much more complicated for multivariate polynomials; the dice calculator is only computing the result of a single point, so this clearly isn’t what’s going on.
Instead, the calculator is exploiting two key facts about our generating functions:
And finally, because we can easily calculate the number of outcomes for some set of dice rolls, if we multiply the unexpanded polynomial by that total number of outcomes, we can arrive at a polynomial whose coefficients are all guaranteed to be positive integers. Let’s call this multiplied polynomial a scaled generating function, since we’re scaling our probability-based function by the total number of outcomes.
And with that, we can finally do one weird trick (number theorists hate him) to recover all of the expanded polynomial’s coefficients.
I have to admit, I didn’t find this one on my own. After googling some mishmash of words like “extract integer polynomial coefficients”, I stumbled across this r/math post about a technique for determining a polynomial from just two inputs. The author admits they didn’t find this on their own, and cites another r/math post in a thread about math “magic tricks”, which itself cites a blog post on the trick, which itself cites a MathOverflow answer. That post, by MathOverflow user Aeryk in 2012, is as far as I can tell the earliest mention of this trick, so until I learn of an earlier discoverer I’ll call this technique Aeryk’s Sieve.
Here’s how it works. Say you’ve got a single-variable polynomial whose coefficients are all positive integers:
Let’s see what happens if we evaluate it at 10:
Notice anything interesting about the answer? Its digits are our coefficients, but in reverse! I’ll color-code the coefficients to make them easier to pick out:
Let’s try a different, bigger example, this time with some coefficients larger than 9:
Same thing: each of our coefficients ends up encoded in the result! So what’s going on here?
Aeryk’s Sieve is exploiting the fact that if you squint hard enough, polynomials look a lot like numbers. What I mean by that is that the definition of a base 10 number whose digits are $d_n d_{n-1} \dots d_1 d_0$ is:
$$d_n \cdot 10^n + d_{n-1} \cdot 10^{n-1} + \dots + d_1 \cdot 10 + d_0$$
Which lines up with our first example quite nicely. By evaluating that one at 10, we end up with a 4 digit result whose digits are the coefficients, because that’s the definition of a 4 digit number. But of course, base 10 numbers only work for digits between 0 and 9 — what if we want digits higher than that like, say, in our second example?
Well, we simply pretend that the polynomial is a number written in some larger base, and hence evaluate it at that base instead of 10. You’ll note that this nicely accounts even for the single-digit coefficient, which gets automatically zero-padded. If we tried evaluating it at 10 instead, the result’s digits don’t line up with our coefficients anymore, since the coefficients are large enough to “overlap” in the base 10 result.
Since Aeryk’s Sieve works by interpreting the polynomial as a number in an arbitrary base, it doesn’t actually need to be evaluated on only powers of 10; doing so just makes for nicely formatted results that we can read in base 10. So finally we can state the general form of the sieve: for any polynomial $p(x)$ with non-negative integer coefficients, the largest of which is $c$, if $b > c$ then $p(b)$ can be interpreted as a base $b$ number whose digits are the coefficients of $p$.
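Here’s that statement as a few lines of Python, using a stand-in polynomial of my own with one deliberately large coefficient, so you can see base 10 fail and a larger base succeed:

```python
# Stand-in polynomial p(x) = 7 + 140x + 3x^2, kept in unexpanded (callable) form.
p = lambda x: 7 + 140 * x + 3 * x**2

# Base 10 is too small: the 140 "overlaps" into neighbouring digits.
print(p(10))  # 1707 -- you can't read 7, 140, 3 out of this

# Any base larger than the largest coefficient works; 1000 keeps things readable.
b = 1000
encoded = p(b)
print(encoded)  # 3140007 -- blocks of three digits: 3 | 140 | 007
print([(encoded // b**k) % b for k in range(3)])  # [7, 140, 3]
```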
But, of course, there’s a snag here: we’re using the sieve because we don’t know our generating function’s coefficients, so how can we choose an appropriately sized base $b$? Well, note that the original reddit post claimed we needed two evaluations, not one.
We’ll use our first evaluation to set $b = p(1) + 1$, exploiting the fact that $p(1)$ for any polynomial is equal to the sum of its coefficients, and is thus greater than any single coefficient (the $+1$ is necessary to handle monomials, where the sum and the lone coefficient are equal). Then, we calculate $p(b)$, which is interpretable as a base $b$ number whose digits are the coefficients of $p$. Alternately, if we’d like to keep the nice property of reading our digits off like a base-10 string, we can simply set $b$ equal to the lowest power of 10 larger than $p(1)$.
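In code, that two-evaluation version looks something like this (again with a made-up polynomial; the only other thing we need to know is the maximum degree, which for a dice pool we always do):

```python
def sieve_coefficients(p, max_degree):
    """Recover every coefficient of a polynomial with non-negative integer
    coefficients using just two evaluations (Aeryk's Sieve)."""
    base = p(1) + 1    # p(1) is the sum of the coefficients, so this base is
                       # strictly larger than any single one of them
    encoded = p(base)  # a base-`base` number whose "digits" are the coefficients
    return [(encoded // base**k) % base for k in range(max_degree + 1)]

# Stand-in example: p(x) = 12 + 140x + 9x^3, again as a plain callable.
p = lambda x: 12 + 140 * x + 9 * x**3
print(sieve_coefficients(p, 3))  # [12, 140, 0, 9]
```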
Let’s remind ourselves why we’re doing all of this: we want to know the probabilities of specific dice pool outcomes, which in Generating Function Land is equivalent to finding the coefficients of a particular power of our function’s variable(s). We’ll cover our multivariate Arcs functions in a bit, but to simplify things, let’s reconsider the humble standard 6 sided die, whose generating function is:
$$\frac{1}{6}x + \frac{1}{6}x^2 + \frac{1}{6}x^3 + \frac{1}{6}x^4 + \frac{1}{6}x^5 + \frac{1}{6}x^6$$
Since the sieve requires a function with positive integer coefficients, let’s produce a scaled generating function by multiplying this by 6, the total number of roll outcomes:
$$x + x^2 + x^3 + x^4 + x^5 + x^6$$
Now, suppose we wanted to find the probability of getting 15 when rolling 4 dice. In other words, we want to know the coefficient of $x^{15}$ in the expanded function $(x + x^2 + x^3 + x^4 + x^5 + x^6)^4$.
Using the conveniently power-of-10-based version of Aeryk’s Sieve, we’d first calculate the smallest power of 10 greater than the sum of the polynomial’s coefficients, i.e. its value at 1. Despite not knowing the expanded form of the polynomial, this is easy to calculate: a single die’s scaled function evaluates to 6 at 1 (can you see why this will be the case for any 6 sided die?), and we raise that to the fourth power to get $6^4 = 1296$. The next highest power of 10 is 10,000, so we calculate the scaled 4-dice function at 10,000:
Which is pretty big as far as numbers go. But hopefully you can sort of pick out blocks of digits spaced 4 digits apart. To make it clearer, let me add some leading zeroes and whitespace:
So, if the sieve works, each of these 25 blocks should represent the coefficients of the expanded polynomial in reversed order, i.e. if we number the blocks from 1 to 25 reading left to right, the $n$th block should be the coefficient of $x^{25-n}$. Does this pass the sniff test?
Well, I couldn’t find a decent ground truth for this kind of roll’s probability distribution, but manually (read: with WolframAlpha) expanding the polynomial gives us:
And yeah, the numbers line up! Honestly, the first time I saw this working, it felt pretty magical despite having such mundane inner workings. But hey, many such cases.
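If you’d like to reproduce that check yourself, here’s roughly what it looks like in Python, which handles the giant integer without complaint (the names here are mine, not the calculator’s):

```python
# The scaled generating function for a single d6, left unexpanded.
one_die = lambda x: x + x**2 + x**3 + x**4 + x**5 + x**6

# The scaled function for the whole 4-die pool.
pool = lambda x: one_die(x) ** 4

base = 10_000         # the smallest power of 10 above pool(1) = 6**4 = 1296
encoded = pool(base)  # the giant number from above

# Carve the result into 4-digit blocks, lowest power of x first.
blocks = [(encoded // base**k) % base for k in range(25)]
print(blocks)
# [0, 0, 0, 0, 1, 4, 10, 20, 35, 56, 80, 104, 125, 140, 146, 140, 125, 104, 80, 56, 35, 20, 10, 4, 1]
```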
So, is there an easy way to pluck a specific coefficient from our giganumber, like say the one for $x^{15}$? Since we know the base is 10,000, what we’re looking for are the 4 digits sitting at the $10{,}000^{15}$ place. This is pretty easy to get using a divide and modulo operation:
By dividing our massive number by $10{,}000^{15}$, we’re chopping off the trailing 15 coefficients (the constant plus the 14 lowest powers of $x$), each of which is composed of 4 digits, so a total of 60 digits. Then, we use the modulo operator to lop off every digit from the 10,000th place up, leaving us with just 140, which just so happens to be the coefficient of $x^{15}$! This means of all the possible rolls of 4 dice, 140 of them result in a 15. To turn that into a probability, we just have to divide by the total number of possible rolls, which for a scaled generating function is always just its value at 1, and in this case is $6^4 = 1296$. So, our probability is $140/1296$, or around 10.8%. Success!
To restate this process generally, for any scaled dice generating function $p(x)$, to calculate the probability of rolling a total of $k$, we:
1. Evaluate $p(1)$ to get the total number of possible outcomes.
2. Pick a base $b$ larger than $p(1)$: either $p(1) + 1$, or the next power of 10 above it if we want human-readable digits.
3. Evaluate $p(b)$.
4. Extract the coefficient of $x^k$ by dividing $p(b)$ by $b^k$ and taking the result modulo $b$.
5. Divide that coefficient by $p(1)$ to turn a count of outcomes into a probability.
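Here’s a minimal sketch of those steps in Python, written against any scaled generating function passed in as a plain callable (the function name and signature are my own, not the calculator’s):

```python
def outcome_probability(scaled_gen_fn, target_power):
    """Probability of the outcome encoded by x**target_power, given a scaled
    generating function (non-negative integer coefficients) as a callable."""
    total_outcomes = scaled_gen_fn(1)              # step 1: sum of all coefficients
    base = 10 ** len(str(total_outcomes))          # step 2: next power of 10 above it
    encoded = scaled_gen_fn(base)                  # step 3: one big evaluation
    ways = (encoded // base**target_power) % base  # step 4: pluck the coefficient
    return ways / total_outcomes                   # step 5: count -> probability

# The 4-dice example from above: the probability of rolling a total of 15.
four_dice = lambda x: (x + x**2 + x**3 + x**4 + x**5 + x**6) ** 4
print(outcome_probability(four_dice, 15))  # 0.10802469..., i.e. about 10.8%
```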
And while this is a great trick, it only works for polynomials with one variable. Arcs’ functions, however, have five. So what do we do?
We make the numbers bigger. To illustrate this, let’s use a relatively simple function of two variables:
Since we have two variables, let’s assume we need to choose two bases, $b_x$ and $b_y$, in a process similar to the 1 variable case. As in our first example, all the coefficients are under 10, so $b_x = 10$ should work fine. Let’s see what we get if we partially evaluate the function at $x = 10$:
Well hey, now this looks like a function over the single variable $y$! Let’s use our imagination and pretend that’s the case, then repeat the process: we have to choose a new base $b_y$ that’s larger than any of this new function’s coefficients, so the next power of 10 above the largest of them will do. Now let’s see what happens if we calculate the whole thing with our two bases:
Interesting… it does look like our digits are all accounted for in here. But how are they distributed within the number?
Well, let’s say we’re looking for the digit corresponding to a particular $x^i y^j$ term, which should be 4. Let’s handle the $y$ exponent first: we’ll need to divide by $b_y^j$ and then mod by $b_y$:
Promising. Not only is 4 in this, but as expected, the whole result matches the coefficient of $y^j$ in our partially evaluated function above. Now, moving on to $x$, we’d divide by $b_x^i$ and mod by $b_x$:
And bingo, we’ve successfully plucked the expected coefficient for a 2-variable term. As we did for the single-variable sieve, let’s recap what we did here in a general format:
Suppose we’ve got a 2-variable scaled generating function $p(x, y)$, and we want the coefficient of $x^i y^j$. We:
1. Choose a base $b_x$ larger than every coefficient of $p$; as before, $p(1, 1) + 1$ (or the next power of 10 above it) works.
2. Choose a base $b_y$ larger than every coefficient of the partially evaluated $p(b_x, y)$; $p(b_x, 1) + 1$ works for the same reason.
3. Evaluate $p(b_x, b_y)$.
4. Divide the result by $b_y^j$ and mod by $b_y$ to recover the coefficient of $y^j$ in $p(b_x, y)$.
5. Divide that by $b_x^i$ and mod by $b_x$ to recover the coefficient of $x^i y^j$ in $p(x, y)$.
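Here’s that recipe as a sketch in Python; the function and the example polynomial are stand-ins of my own, not anything from the walkthrough above:

```python
def extract_2var_coefficient(scaled_gen_fn, x_power, y_power):
    """Pluck the coefficient of x**x_power * y**y_power out of a 2-variable
    scaled generating function supplied as a callable."""
    # A base for x that's bigger than the sum of all coefficients,
    # and therefore bigger than any single one of them.
    base_x = scaled_gen_fn(1, 1) + 1
    # A base for y that's bigger than any coefficient of the partially
    # evaluated p(base_x, y); evaluating at y = 1 sums those coefficients.
    base_y = scaled_gen_fn(base_x, 1) + 1
    encoded = scaled_gen_fn(base_x, base_y)
    # Peel off the y digit first (itself a base_x-encoded polynomial in x),
    # then the x digit inside it.
    y_digit = (encoded // base_y**y_power) % base_y
    return (y_digit // base_x**x_power) % base_x

# Stand-in example: the coefficient of x^2 * y in (x + y)^3 should be 3.
p = lambda x, y: (x + y) ** 3
print(extract_2var_coefficient(p, 2, 1))  # 3
```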
While I can’t take credit for Aeryk’s Sieve, as far as I could tell this iterative approach to handling multivariate polynomials isn’t mentioned anywhere else. And while I haven’t formulated a proof for this, in my testing this approach seems to extend to any number of variables.
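For what it’s worth, here’s one way that generalization could look in Python. Treat it as a sketch of the idea rather than a proven implementation, and note that the convention of listing exponents in argument order is my own choice:

```python
def extract_coefficient(scaled_gen_fn, powers):
    """Pluck one term's coefficient out of an n-variable scaled generating
    function; `powers` lists the target exponent of each variable, in the
    same order as the function's arguments."""
    n = len(powers)
    # Choose one base per variable. Each base must exceed every "digit" that can
    # appear at its level, which we bound by evaluating the function with the
    # bases chosen so far and all remaining variables held at 1.
    bases = []
    for i in range(n):
        bases.append(scaled_gen_fn(*bases, *([1] * (n - i))) + 1)
    # One big evaluation, then peel digits off from the last variable inward.
    value = scaled_gen_fn(*bases)
    for base, power in reversed(list(zip(bases, powers))):
        value = (value // base**power) % base
    return value

# Stand-in 3-variable example: the coefficient of x*y*z in (x + y + z)^3 is 6.
print(extract_coefficient(lambda x, y, z: (x + y + z) ** 3, [1, 1, 1]))  # 6
```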
And even though our Arcs functions are indeed of five variables, for the purposes of the calculator I only ever need to pluck terms for two variables at a time: one for each of the horizontal and vertical axes of the chart it generates. A third variable could let me create some sorta 3D distribution visualizer, but honestly anything more than two seemed like overkill to me.
By passing 1 to the other three variables, we effectively collapse together all terms that differ only by those variables, allowing us to group the probabilities of just the two we’re interested in.
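For instance (with a made-up two-variable function rather than the real Arcs one), take $\tfrac{1}{4}xy + \tfrac{1}{4}x + \tfrac{1}{4}y + \tfrac{1}{4}$: setting $y = 1$ leaves $\tfrac{1}{2}x + \tfrac{1}{2}$, where the $xy$ and $x$ terms have merged (as have the $y$ and constant terms), so each remaining coefficient is the total probability of that power of $x$ regardless of what happened with $y$.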
And that’s it! You’re now able to fully calculate the vast array of tactical possibilities offered by any dice roll in Arcs.