CF #526 Div.1 E: The Fair Nut and Rectangles

If 128-bit integers were available, this problem would be easier than it is. The most difficult task is comparing two rational numbers with large numerators and denominators. There is a neat method, described below, that solves this task without any floating-point error. Its running time is logarithmic in the input values, because the method is essentially the Euclidean algorithm.

We are given two rational numbers a/b and c/d. For convenience, assume both are positive. First compare the integral parts; if they differ, the comparison is decided. Otherwise we must compare the fractional parts. Comparing integral parts is trivial, but how do we compare the fractional parts without floating-point numbers? When the integral parts are equal, we just want to know whether the following inequality holds.

\[ \frac{a \bmod b}{b} < \frac{c \bmod d}{d}. \]

This is equivalent to

\[ \frac{b}{a \bmod b} > \frac{d}{c \bmod d} \]

if both a mod b and c mod d are nonzero (if either is zero, the answer is immediate). Applying this reformulation repeatedly, we eventually obtain the desired result.

#include <cassert>

// b and d must be positive
bool cmp(long long a, long long b, long long c, long long d) {
    long long q1 = a >= 0 ? a / b : (a - b + 1) / b;  // floor(a / b)
    long long q2 = c >= 0 ? c / d : (c - d + 1) / d;  // floor(c / d)
    if (q1 != q2) return q1 < q2;
    long long r1 = a - b * q1;  // a mod b, in [0, b)
    long long r2 = c - d * q2;  // c mod d, in [0, d)
    if (r2 == 0) return false;
    if (r1 == 0) return true;
    return cmp(d, r2, b, r1);  // compare reciprocals of the fractional parts, flipped
}

// reference implementation: valid only while the products fit in long long
bool cmp2(long long a, long long b, long long c, long long d) {
    return a * d < c * b;
}

int main() {
    for (int i = -100; i <= 100; i++)
        for (int j = 1; j <= 100; j++)
            for (int k = -100; k <= 100; k++)
                for (int l = 1; l <= 100; l++)
                    assert(cmp(i, j, k, l) == cmp2(i, j, k, l));
}

Compute exp and log of formal Dirichlet series in O(n log^2 n) time

Perhaps this is already known, and some of what follows may be unreliable. The exponential of a formal Dirichlet series can be written as

\[ \exp\left( \frac{a_2}{2^s} + \frac{a_3}{3^s} + \frac{a_4}{4^s} + \cdots \right) = \exp\left( \frac{a_2}{2^s} \right) \exp\left( \frac{a_3}{3^s} \right) \exp\left( \frac{a_4}{4^s} \right) \cdots. \]
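Applying the product above factor by factor suggests an algorithm: for each k, multiplying by the truncated exp(a_k / k^s) only touches indices divisible by powers of k, so the total work is roughly Σ_k n/k = O(n log n). A sketch with double coefficients (the function name and the in-place update scheme are my own):

```cpp
#include <cassert>
#include <cmath>
#include <vector>
using namespace std;

// exp of a Dirichlet series a_2/2^s + a_3/3^s + ... (no 1/1^s term),
// truncated at index n, by multiplying in exp(a_k / k^s) for k = 2..n.
// exp(a_k / k^s) = sum_m (a_k^m / m!) / k^{ms}, so only multiples of
// powers of k are touched; total work is about sum_k n/k = O(n log n).
vector<double> dirichlet_exp(const vector<double>& a) {
    int n = (int)a.size() - 1;       // coefficients a[1..n]; a[1] is ignored
    vector<double> b(n + 1, 0.0);
    b[1] = 1.0;                      // start from the series 1 = 1/1^s
    for (int k = 2; k <= n; k++) {
        if (a[k] == 0.0) continue;
        // descend over multiples of k, so b[j / p] is still the old value
        for (int j = (n / k) * k; j >= k; j -= k) {
            double c = 1.0;          // running a_k^m / m!
            long long p = 1;         // running k^m
            for (int m = 1; ; m++) {
                p *= k;
                if (j % p != 0) break;   // k^m no longer divides j
                c = c * a[k] / m;
                b[j] += b[j / p] * c;
                if (p > j / k) break;    // the next power would exceed j
            }
        }
    }
    return b;
}
```

For example, with a_2 = 1 and all other a_k = 0, the result is b_{2^m} = 1/m! and zero elsewhere, matching exp(1/2^s) directly.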

If we implement this carefully, we can obtain an O(n log n) time algorithm (I think). The logarithm, on the other hand, we compute straightforwardly:

\[ \log\left( 1 + \frac{a_2}{2^s} + \frac{a_3}{3^s} + \cdots \right) = \sum_{k=1}^{\lfloor \log_2 n \rfloor} \frac{(-1)^{k - 1}}{k} \left( \frac{a_2}{2^s} + \frac{a_3}{3^s} + \cdots \right)^k. \]
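Since the inner series has no 1/1^s term, its k-th power has support only on indices ≥ 2^k, which is why ⌊log₂ n⌋ terms suffice. A direct O(n log² n) sketch with double coefficients (the names dirichlet_mul and dirichlet_log are mine):

```cpp
#include <cassert>
#include <cmath>
#include <vector>
using namespace std;

// Dirichlet multiplication truncated at n: h[i*j] += f[i] * g[j], O(n log n).
vector<double> dirichlet_mul(const vector<double>& f, const vector<double>& g) {
    int n = (int)f.size() - 1;
    vector<double> h(n + 1, 0.0);
    for (int i = 1; i <= n; i++)
        if (f[i] != 0.0)
            for (int j = 1; i * j <= n; j++) h[i * j] += f[i] * g[j];
    return h;
}

// log of a Dirichlet series with leading coefficient f[1] = 1, via
// log(1 + g) = sum_k (-1)^{k-1} g^k / k. The series g has no 1/1^s term,
// so g^k starts at index 2^k and only floor(log2 n) powers contribute.
vector<double> dirichlet_log(const vector<double>& f) {
    int n = (int)f.size() - 1;
    vector<double> g(f);
    g[1] = 0.0;                              // g = f - 1
    vector<double> res(n + 1, 0.0), pw(g);   // pw = g^k
    for (int k = 1; (1 << k) <= n; k++) {
        double sign = (k % 2 == 1) ? 1.0 : -1.0;
        for (int i = 1; i <= n; i++) res[i] += sign * pw[i] / k;
        pw = dirichlet_mul(pw, g);
    }
    return res;
}
```

As a sanity check, the logarithm of the zeta function (all coefficients 1) should have coefficient 1/m at prime powers p^m and 0 elsewhere.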

The following relation would give a more efficient algorithm, but it is difficult to use over F_p, because differentiating a Dirichlet series with respect to s introduces factors of log n, which are real numbers (I think).

\[ \left( \log f(x) \right)' = \frac{f'(x)}{f(x)}. \]

Codeforces #519 F. Make It One

UPD: November 11, 2018

I believe this property always holds, but I haven't proved it yet. For small n, we can check it partially. Perhaps the Dirichlet convolution and the Möbius inversion are related to this technique.

Dirichlet convolution - Wikipedia
Möbius inversion formula - Wikipedia
Dirichlet series - Wikipedia

This problem can be solved by the zeta transform and the Möbius transform. The zeta transform is defined as

\[ \zeta f(i) = \sum_{i \mid j} f(j). \]

The Möbius transform, its inverse, is defined similarly. Then the (gcd) convolution of two sequences is

\[ (f \ast g)(k) = \sum_{\gcd(i,j) = k} f(i)g(j). \]

The zeta transform turns this convolution into a pointwise product:

\[ \zeta (f \ast g)(k) = \zeta f(k) \, \zeta g(k). \]

Thus, we can compute the first n terms of $\underbrace{f\ast\cdots\ast f}_{k}$ in O(n log n + n log k) time. My solution: to avoid collisions, pick some primes randomly and compute the values over F_p.
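The transforms themselves are short to implement. A sketch with plain long long values (the actual solution works over F_p; the function names zeta and moebius are mine):

```cpp
#include <cassert>
#include <vector>
using namespace std;

// In-place divisor-zeta transform: f[i] <- sum over multiples j of i of f[j],
// in O(n log n). Ascending i is safe: only f[i] is written at step i, so the
// values f[j] for j > i are still the original ones when read.
void zeta(vector<long long>& f) {
    int n = (int)f.size() - 1;
    for (int i = 1; i <= n; i++)
        for (int j = 2 * i; j <= n; j += i) f[i] += f[j];
}

// In-place Moebius transform (inverse of zeta): descending i, so the values
// f[j] at proper multiples j > i have already been restored.
void moebius(vector<long long>& f) {
    int n = (int)f.size() - 1;
    for (int i = n; i >= 1; i--)
        for (int j = 2 * i; j <= n; j += i) f[i] -= f[j];
}
```

To get f^{∗k}, apply zeta, raise each entry to the k-th power by fast exponentiation (the O(n log k) part, usually modulo a random prime), and apply moebius.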

Apparently, when we cannot use umlaut letters, we should write not "Mobius" but "Moebius". Please forgive me.

UPD: In the custom invocation of Codeforces, the following code always outputs the same results.

#include <iostream>
#include <ctime>
#include <random>

using namespace std;

int main() {
    cout << time(NULL) << endl;          // current wall-clock time
    cout << (void *)main << endl;        // address of main (affected by ASLR)
    cout << random_device()() << endl;   // a value from std::random_device
}

You may think we cannot create a random seed there, but this behavior doesn't appear on the main judge: you can see that time(NULL) returns different values there.

Exponential Function Evaluation

This article shows an implementation of the exponential of a formal power series, running in O(n log n) time. As a benchmark, I compute the nth term of the partition function.
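The partition benchmark can be cross-checked with the classical O(n²) recurrence that follows from the exp relation b' = a'b, i.e. n·bₙ = Σₖ k·aₖ·bₙ₋ₖ: since ∏ 1/(1−xᵏ) = exp(Σₙ σ(n)/n · xⁿ), here k·aₖ is just σ(k), the sum of divisors, and the recurrence n·p(n) = Σₖ σ(k)·p(n−k) stays in integer arithmetic throughout. This is only a slow reference, not the O(n log n) method (function name is mine):

```cpp
#include <cassert>
#include <vector>
using namespace std;

// Reference values: n * p(n) = sum_{k=1}^{n} sigma(k) * p(n - k),
// a specialization of the exp recurrence n*b_n = sum k*a_k*b_{n-k}.
vector<long long> partitions(int n) {
    vector<long long> sigma(n + 1, 0), p(n + 1, 0);
    for (int i = 1; i <= n; i++)
        for (int j = i; j <= n; j += i) sigma[j] += i;  // sum of divisors
    p[0] = 1;
    for (int m = 1; m <= n; m++) {
        long long s = 0;
        for (int k = 1; k <= m; k++) s += sigma[k] * p[m - k];
        p[m] = s / m;  // exact: the sum is always divisible by m
    }
    return p;
}
```

For instance, this reproduces p(5) = 7 and p(10) = 42.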

[1] is a standard reference on formal power series (I believe). Newton's method is described in [2], but its correctness is not discussed there. Please try proving it yourself; it is a little difficult but not impossible.

[1] Ivan Niven, Formal Power Series, The American Mathematical Monthly, vol. 76, pp. 871--889, 1969.
[2] R. P. Brent, Multiple-precision zero-finding methods and the complexity of elementary function evaluation, in Analytic Computational Complexity, Academic Press, 1976.

Analysis of path halving for DSU

Consider the following implementation of DSU. This approach is shown in [1, 2].

int find(int x) {
  while (p[p[x]] != p[x]) x = p[x] = p[p[x]];  // path halving
  return p[x];
}

void link(int x, int y) {
  p[find(x)] = find(y);
}

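For completeness, here is a self-contained version of the structure with a tiny usage example (the vector-based setup is my own):

```cpp
#include <cassert>
#include <vector>
using namespace std;

vector<int> p;  // parent array; a root satisfies p[x] == x

// find with path halving, as above
int find(int x) {
    while (p[p[x]] != p[x]) x = p[x] = p[p[x]];
    return p[x];
}

// union by redirecting one root to the other (no union by rank/size here)
void link(int x, int y) { p[find(x)] = find(y); }
```

Initialize p so that every node is its own parent, then link and query freely.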
Surprisingly, both find and link take amortized O(log n) time (splay trees use a similar analysis). Define the potential of a node x as the natural logarithm of the size of its subtree; abusing notation, we also write x for that size, so the potential of x is ln(x). The potential of the whole structure is the sum of the potentials of all nodes. Let us consider the following operation.

figure: path halving (a subtree of size γ is relinked from its parent β to its grandparent)

This operation changes the size of β but not the size of any other node. Therefore, the change in potential is

\[
\begin{aligned}
\Delta\Phi &= \ln(\beta - \gamma) - \ln(\beta) \\
&= \ln\left( 1 - \frac{\gamma}{\beta} \right) \\
&\le -\frac{\gamma}{\beta}.
\end{aligned}
\]

If γ < β/2, the edge from the relinked subtree to β is a light edge. Since a path to the root contains at most O(log n) light edges, the total cost of this case per find does not exceed O(log n). If γ ≥ β/2, the change in potential is at most -1/2, so the step is paid for by the potential drop. Thus, find takes O(log n) amortized.

[1] R. E. Tarjan, J. van Leeuwen, Worst-case analysis of set union algorithms, Journal of the ACM, 31(2), pp. 245--281, 1984.
[2] R. E. Tarjan, Data Structures and Network Algorithms, Society for Industrial and Applied Mathematics, 1983.
