Igor_Parfenov's blog

By Igor_Parfenov, history, 13 months ago, translation

1823A - A-characteristic

Tutorial
C++ solution
Python solution

1823B - Sort with Step

Tutorial
C++ solution
Python solution

1823C - Strongly Composite

Tutorial
C++ solution
Python solution
Notes

1823D - Unique Palindromes

Tutorial
C++ solution
Python solution
Notes

1823E - Removing Graph

Tutorial
C++ solution
Python solution

1823F - Random Walk

Tutorial
C++ solution
Python solution
Codeforces Round 868 (Div. 2) Editorial

»
13 months ago, # |

The editorial came out so fast!

»
13 months ago, # |

The implementation of F is wrong: you forgot to reduce modulo 998244353. It's causing an unexpected verdict on hacks.

»
13 months ago, # |

Problem B was cool.

»
13 months ago, # |

Thanks for the blue, brother!

»
13 months ago, # |

Are you using a palindromic tree for the checker of problem D?

»
13 months ago, # |

Okay, I don't think I could have realistically proved that $$$chain[x] = \left\lfloor \frac{x}{\ell} \right\rfloor$$$ within the contest time, but... I think I could definitely have observed the pattern if I had just bothered to generate the first 100 Sprague-Grundy values or so and eyeballed them (as in the sketch below). Everything else was in place except this single component of the solution >_>. I hope I can learn to do this for future Sprague-Grundy problems now.
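
For reference, here is a minimal brute-force sketch in Python of how one could generate those Grundy values, with hypothetical small parameters $$$l = 2$$$, $$$r = 3$$$ and assuming the chain game from the editorial: one move removes a contiguous block of $$$s$$$ vertices, $$$l \le s \le r$$$, from a path, splitting it into two independent paths whose Grundy values are XORed.

# Brute force for eyeballing the pattern chain[x] = floor(x / l).
# Assumption: a move removes a contiguous block of s vertices (l <= s <= r)
# from a path of x vertices, leaving two independent paths; the Grundy value
# of the resulting position is the XOR of the values of the two parts.
def mex(values):
    m = 0
    while m in values:
        m += 1
    return m

l, r = 2, 3                    # hypothetical small parameters with l < r
N = 100
chain = [0] * (N + 1)          # chain[x] is built bottom-up
for x in range(1, N + 1):
    reachable = set()
    for s in range(l, min(r, x) + 1):
        for a in range(0, x - s + 1):          # a vertices left, x - s - a right
            reachable.add(chain[a] ^ chain[x - s - a])
    chain[x] = mex(reachable)

print(chain[:20])
print([x // l for x in range(20)])             # should match the line above

Running it for a few pairs with $$$l < r$$$ is enough to spot the pattern.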

Anyway, I really enjoyed these problems a lot! I even had fun thinking about F briefly, until I was assured that it's definitely beyond my capabilities. These are nice combinations of standard tools that still require additional creative effort to apply correctly.

Also, I especially appreciate that the implementations are very smooth once the correct idea is understood. For example, the problemsetter could've easily done the jerk move of requiring the program to print the swap sequence in B or the elements of the maximum-length array in C, which would not have really made the problems harder to figure out, but would have been unnecessarily more tedious.

»
13 months ago, # |

I don't know about Sprague-Grundy theory. Is there anyone who could teach me?

»
13 months ago, # |

Thanks so much for providing Python solutions~

»
13 months ago, # |

I don't understand why $$$cycle[x]$$$ equals $$$chain[x]$$$ for $$$x$$$ up to $$$r + l - 1$$$. Could anyone please explain it to me? Thanks.

  • »
    »
    13 months ago, # ^ |

    Suppose $$$chain[x] = \left\lfloor \frac{x}{l} \right\rfloor$$$ is correct. Then when $$$l \le x \le l + r - 1$$$, $$$cycle[x] = \operatorname{mex}\left( chain[(x-r)^+], \dots, chain[x-l] \right) = \operatorname{mex}\left( 0, \dots, \left\lfloor \frac{x-l}{l} \right\rfloor \right) = \left\lfloor \frac{x}{l} \right\rfloor = chain[x]$$$, and when $$$x < l$$$, $$$cycle[x] = chain[x] = 0$$$.

    Edit: The editorial actually has a typo: the mex should start from $$$chain[(x-r)^+]$$$, not $$$chain[0]$$$.
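
    For a concrete (hypothetical) check, take $$$l = 2$$$, $$$r = 3$$$, $$$x = 4$$$, which satisfies $$$l \le x \le l + r - 1$$$. The formula above gives $$$chain[1] = 0$$$ and $$$chain[2] = 1$$$, so

    $$$cycle[4] = \operatorname{mex}\left( chain[1], chain[2] \right) = \operatorname{mex}(0, 1) = 2 = \left\lfloor \frac{4}{2} \right\rfloor = chain[4].$$$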

»
13 months ago, # |

I thought we needed to consider $$$l = r$$$ for E and was stuck for a long time. Now I am curious: is there a pattern in this case?

»
13 months ago, # |

1823F - Random Walk has another solution. Root the tree at $$$t$$$ and consider the following question: suppose we are at a vertex $$$v$$$ at the given moment. What is the expected number of times we will visit it from now on, including the current time?

Well, if we go into any of its $$$\deg(v) - 1$$$ children, we must visit it again, since the walk has to pass back through $$$v$$$ to reach $$$t$$$. Else, if we go up to its parent, the probability we visit it again is exactly $$$1 - \frac{1}{d(v)}$$$, where $$$d(v)$$$ is the depth of $$$v$$$ in the tree. I think the adedalic solution covers an approach to seeing this, but the short answer is induction on the length of the path between $$$t$$$ and $$$v$$$.

Hence, the expected number of visits $$$e(v)$$$ to $$$v$$$, given that we are there at the current moment, satisfies

$$$e(v) = 1 + \left(\frac{\deg(v) - 1}{\deg(v)} + \frac{1}{\deg(v)}\cdot \left(1 - \frac {1}{d(v)}\right)\right)e(v).$$$ (for some reason the displaystyle math environment is acting weird for me)

Simplifying this gives $$$e(v) = \deg(v)\cdot d(v).$$$ However, this doesn't take into account the probability that $$$v$$$ is ever visited! If $$$v$$$ is on the path from $$$s$$$ to $$$t$$$, then it is always visited. Else, suppose that the first time the path from $$$v$$$ to $$$t$$$ intersects with the path from $$$s$$$ to $$$t$$$ is at vertex $$$u$$$. Since $$$u$$$ is always visited, the probability that $$$v$$$ is ever visited is the probability that a random walk from $$$u$$$ on the bamboo/path from $$$v$$$ to $$$t$$$ visits $$$v$$$ before $$$t$$$. But by the same induction, this is exactly $$$\frac{d(u)}{d(v)}$$$.

Hence, if $$$e'(v)$$$ is the true expected number of visits to $$$v$$$, we have $$$e'(v) = e(v) \cdot \frac{d(u)}{d(v)} = \deg(v) \cdot d(u)$$$. Finally, if $$$v = t$$$ we have $$$e'(v) = 1$$$. This is fairly easy to calculate: 203820888
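
A minimal Python sketch of this computation follows (the function name and the tiny sanity check are illustrative only; it assumes the task is to output, for every vertex, the expected number of visits modulo 998244353).

from collections import deque

MOD = 998244353

def expected_visits(n, edges, s, t):
    # adjacency lists, 1-indexed
    adj = [[] for _ in range(n + 1)]
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)

    # root the tree at t: parents and depths d(v) via BFS
    par = [0] * (n + 1)
    depth = [0] * (n + 1)
    seen = [False] * (n + 1)
    order = []
    queue = deque([t])
    seen[t] = True
    while queue:
        v = queue.popleft()
        order.append(v)
        for w in adj[v]:
            if not seen[w]:
                seen[w] = True
                par[w] = v
                depth[w] = depth[v] + 1
                queue.append(w)

    # mark the vertices of the s -> t path
    v = s
    on_path = [False] * (n + 1)
    while True:
        on_path[v] = True
        if v == t:
            break
        v = par[v]

    # meet[v] = d(u), where u is the first vertex of path(v -> t) lying on path(s -> t)
    meet = [0] * (n + 1)
    for v in order:                      # BFS order guarantees parent before child
        meet[v] = depth[v] if on_path[v] else meet[par[v]]

    # e'(v) = deg(v) * d(u), and e'(t) = 1
    return [1 if v == t else len(adj[v]) * meet[v] % MOD for v in range(1, n + 1)]

# sanity check on the path 1 - 2 - 3 with s = 1, t = 3: expected visits 2, 2, 1
print(expected_visits(3, [(1, 2), (2, 3)], 1, 3))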

  • »
    »
    12 months ago, # ^ |

    Why, if we go up to its parent, is the probability we visit it again exactly $$$1 - \frac{1}{d(v)}$$$, where $$$d(v)$$$ is the depth of $$$v$$$ in the tree? Can you explain it to me?

»
13 months ago, # |

The first equation of F may seem obvious; here is my attempt at deriving it rigorously, where $$$x_{\tau}$$$ is the position of the chip at time $$$\tau$$$, starting from $$$\tau = 1$$$:

$$$E(c(i))$$$

$$$= E\left(\sum_{\tau=1}^\infty 1_{x_{\tau} = i}\right)$$$

$$$= \sum_{\tau=2}^\infty P(x_{\tau} = i) + P(x_1 = i)$$$

$$$= \sum_{\tau=2}^\infty \sum_{j=1}^n P(x_{\tau} = i \mid x_{\tau - 1} = j)\, P(x_{\tau - 1} = j) + 1_{i=s}$$$

$$$= \sum_{\tau=1}^\infty \sum_{j \in N(i),\, j \neq t} \frac{1}{|N(j)|} P(x_{\tau} = j) + 1_{i=s}$$$

$$$= \sum_{j \in N(i),\, j \neq t} \frac{E(c(j))}{|N(j)|} + 1_{i=s}$$$

Notice the special $$$1_{i=s}$$$ at the end. Also, if $$$i = t$$$, we need to change the definition so that $$$|N(t)| = 1$$$. Imagine that it then goes into some other absorbing state, so $$$t$$$ is visited only once.
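
As a quick sanity check of this recurrence (not the intended solution), here is a small Monte Carlo simulation in Python on a hypothetical star graph with center $$$2$$$ and leaves $$$1, 3, 4$$$, with $$$s = 1$$$ and $$$t = 3$$$; solving the recurrence by hand for this graph gives $$$E(c(1)) = 2$$$, $$$E(c(2)) = 3$$$, $$$E(c(3)) = 1$$$, $$$E(c(4)) = 1$$$.

import random

# Star graph: center 2, leaves 1, 3, 4; the walk starts at s = 1 and stops at t = 3.
adj = {1: [2], 2: [1, 3, 4], 3: [2], 4: [2]}
s, t = 1, 3
trials = 200_000
visits = {v: 0 for v in adj}
for _ in range(trials):
    v = s
    visits[v] += 1
    while v != t:
        v = random.choice(adj[v])       # move to a uniformly random neighbour
        visits[v] += 1

# empirical E(c(v)); should be approximately {1: 2.0, 2: 3.0, 3: 1.0, 4: 1.0}
print({v: round(visits[v] / trials, 3) for v in sorted(adj)})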

»
13 months ago, # |

For problem F, we have

$$$E(c(u)) = \sum_{(u, v)} E(c(v)) \cdot P(v, u),$$$

where $$$P(v, u)$$$ is the probability that the chip at $$$v$$$ moves to $$$u$$$.

So $$$E(c(u))$$$ and $$$E(c(v))$$$ depend on each other. Since the problem is given on a tree, we can assume that

$$$E(c(u)) = free(u) + E(c(p)) \cdot g(u),$$$

where $$$free(u)$$$ is a constant coefficient and $$$g(u)$$$ is the coefficient attached to the parent $$$p$$$. Our goal is to find $$$free(u)$$$ and $$$g(u)$$$, so how can we do that? We have

$$$E(c(u)) = \sum_{(u, v)} \left(free(v) + E(c(u)) \cdot g(v)\right) \cdot P(v, u) + E(c(p)) \cdot P(p, u),$$$

which is equivalent to

$$$\left(1 - \sum_{(u, v)} g(v) \cdot P(v, u)\right) \cdot E(c(u)) = \sum_{(u, v)} free(v) \cdot P(v, u) + E(c(p)) \cdot P(p, u),$$$

where $$$v$$$ ranges over the children of $$$u$$$.

We run one DFS to calculate $$$free(u)$$$ and $$$g(u)$$$, and then another DFS to calculate the final answer.

Short solution
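
For illustration, here is a minimal Python sketch of this two-DFS approach (a reconstruction under stated assumptions, not the linked solution): the tree is rooted at $$$t$$$, $$$P(v, u) = \frac{1}{\deg(v)}$$$ for $$$v \neq t$$$ while edges out of $$$t$$$ contribute nothing because the walk stops there, and answers are required modulo 998244353.

import sys

MOD = 998244353

def expected_visits(n, edges, s, t):
    sys.setrecursionlimit(300000)        # a sketch; an iterative DFS is safer for large n
    adj = [[] for _ in range(n + 1)]
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    inv_deg = [0] * (n + 1)
    for v in range(1, n + 1):
        inv_deg[v] = pow(len(adj[v]), MOD - 2, MOD)

    free = [0] * (n + 1)
    g = [0] * (n + 1)

    # first DFS (tree rooted at t): express E(c(u)) = free(u) + E(c(p)) * g(u)
    def dfs1(u, p):
        acc_g = 0                        # sum over children v of g(v) / deg(v)
        acc_f = 0                        # sum over children v of free(v) / deg(v)
        for v in adj[u]:
            if v != p:
                dfs1(v, u)
                acc_g = (acc_g + g[v] * inv_deg[v]) % MOD
                acc_f = (acc_f + free[v] * inv_deg[v]) % MOD
        inv_a = pow((1 - acc_g) % MOD, MOD - 2, MOD)
        free[u] = (acc_f + (1 if u == s else 0)) * inv_a % MOD
        # the chip never leaves t, so the parent term vanishes when p is t (or absent)
        g[u] = 0 if p in (0, t) else inv_deg[p] * inv_a % MOD

    # second DFS: substitute the parent's value top-down
    E = [0] * (n + 1)
    def dfs2(u, p):
        E[u] = (free[u] + E[p] * g[u]) % MOD
        for v in adj[u]:
            if v != p:
                dfs2(v, u)

    dfs1(t, 0)
    dfs2(t, 0)
    E[t] = 1                             # the walk stops the first time it reaches t
    return E[1:]

# sanity check on the path 1 - 2 - 3 with s = 1, t = 3: expected visits 2, 2, 1
print(expected_visits(3, [(1, 2), (2, 3)], 1, 3))
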
»
13 months ago, # |

Very fun contest! Both problems D and E have quite interesting solutions (I didn't attempt problem F).

»
13 months ago, # |

Thank you for task E, it is very nice!

»
12 months ago, # |

A was too hard.

»
12 months ago, # |

I downvoted the editorial since I felt there was a big gap in difficulty between problems C and D. 1300 to 1800? Igor_Parfenov, people like me do exist in this range.