Vladithur's blog

By Vladithur, history, 9 months ago, In English

Hope you liked the problems!

1856A - Tales of a Sort

Hints
Tutorial
Solution
Feedback

1856B - Good Arrays

Hints
Tutorial
Solution
Feedback

1856C - To Become Max

Hints
Tutorial
Solution
Feedback

1856D - More Wrong

Hints
Tutorial
Solution
Feedback

1856E1 - PermuTree (easy version)

Hints
Tutorial
Solution
Feedback

1856E2 - PermuTree (hard version)

Hints
Tutorial
Solution
Feedback

UPD: Tutorial for E1 has been added.

UPD2: Tutorial for E2 has been added.

  • Vote: +195

»
9 months ago, # |
  Vote: +30

Can non-custom bitsets even pass E2? If not, the problem is just bad imo.

  • »
    »
    9 months ago, # ^ |
      Vote: +35

    The model solution doesn't use custom bitsets.

  • »
    »
    9 months ago, # ^ |
      Vote: +12

    I think the only "custom" thing it requires is modifying the bitset's size via templates.

  • »
    »
    9 months ago, # ^ |
      Vote: +18

    I did not use a bitset; my solution squeezed in under the 3 s limit.

    • »
      »
      »
      9 months ago, # ^ |
        Vote: +8

      Same here. I used a bitset to store the DP, but I didn't use bitwise operations. I moved the knapsack part out of the DFS (which I think is faster), and my solution ran in only 2105 ms.

      My code

      • »
        »
        »
        »
        9 months ago, # ^ |
        Rev. 3   Vote: +3

        I'd like to know why my solution passed even though I used a normal array instead of a bitset. It ran in only 1903 ms.

        My English may be bad, please forgive me.

        But I still think achieving a "dynamic" bitset via templates is pretty cool!

        My code

        • »
          »
          »
          »
          »
          9 months ago, # ^ |
          Rev. 3   Vote: +11

          Maybe it's because I used a vector to store the tree; vectors are much slower when $$$n\ge 5\times10^5$$$.

          UPD: I changed the way the tree is stored, and it took only 1918 ms.

          UPD2: Because of the large amount of input, scanf is quite slow. I added input optimizations and the new code ran in only 1201 ms.

          • »
            »
            »
            »
            »
            »
            9 months ago, # ^ |
              Vote: +4

            Another optimization:

            Since we need to divide a set of integers into two sets, one of the two sets must not contain the maximum element. So we only need to run the knapsack on the remaining elements. It is a bit like dsu on tree.
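A minimal sketch of this optimization (my own illustration with names of my own choosing, not the commenter's actual code): drop one occurrence of the maximum, run the subset-sum only over the rest, and treat every achievable sum as the side that does not contain the maximum.

```cpp
#include <bits/stdc++.h>
using namespace std;

// Best product of a two-set split of `sizes`: the group without the
// largest element is a subset of the remaining items, so the knapsack
// only runs over those.
long long bestProduct(vector<long long> sizes) {
    long long total = accumulate(sizes.begin(), sizes.end(), 0LL);
    // remove one occurrence of the maximum element
    sizes.erase(max_element(sizes.begin(), sizes.end()));
    vector<char> dp(total + 1, 0);
    dp[0] = 1;
    for (long long s : sizes)
        for (long long j = total; j >= s; --j)
            if (dp[j - s]) dp[j] = 1;
    long long best = 0;
    for (long long s = 0; s <= total; ++s)
        if (dp[s]) best = max(best, s * (total - s));  // s vs. total - s
    return best;
}
```

Every split assigns the maximum to exactly one side, so all splits are still covered while the DP runs over fewer items.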

  • »
    »
    9 months ago, # ^ |
      Vote: +37

    Here's a simple solution using std::bitset.

    Instantiate bitsets of doubling sizes in advance, then at runtime select the smallest one that fits.

    https://codeforces.com/contest/1856/submission/217355793
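A minimal sketch of that trick (my own illustration, not the linked submission): a template instantiates `std::bitset` at doubling sizes, and at runtime we recurse until the smallest size that fits the current knapsack is reached.

```cpp
#include <bits/stdc++.h>
using namespace std;

vector<int> items;  // subtree sizes to split, filled by the caller

// Pick the smallest power-of-two bitset that holds sums 0..total,
// then run the classic subset-sum and maximize s * (total - s).
template <int N>
long long bestSplit(int total) {
    if constexpr (N < (1 << 20))            // hard cap keeps instantiation finite
        if (total >= N) return bestSplit<N * 2>(total);  // too small, double it
    bitset<N> dp;
    dp[0] = 1;
    for (int w : items) dp |= dp << w;      // classic subset-sum
    long long best = 0;
    for (int s = 0; s <= total; ++s)
        if (dp[s]) best = max(best, 1LL * s * (total - s));
    return best;
}
```

Calling `bestSplit<1>(total)` starts the dispatch; only the bitset that is actually reached does any work.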

  • »
    »
    9 months ago, # ^ |
      Vote: 0

    You don't need a bitset to get accepted; somehow the bitset doesn't seem to make a big difference.

    Custom bitset solution (577 ms): 217703571
    No bitset solution (701 ms): 217729087

    But theoretically speaking, the solution without bitsets is $$$O(n\sqrt{n}\log{}n)$$$, which should get a TLE verdict (compared to $$$O(\frac{n\sqrt{n}\log{}n}{64})$$$ for the bitset solution). Maybe it's just hard to find a countertest.

»
9 months ago, # |
  Vote: +279

Bitsets in authors' solution once again, are they ret...?!?!?!?!?!? (jk)

»
9 months ago, # |
  Vote: 0

Binary search OP.

»
9 months ago, # |
Rev. 3   Vote: +54

This is my solution for 1856E2 - PermuTree (hard version) in testing.

Code

The complexity seems to be $$$O(n\sqrt{n})$$$; however, it passes. In fact, it remains one of the fastest solutions.

There are some justifications for why it is fast:

  • If a child's subtree is bigger than half of the subtree at the current node, the node does no knapsack work. This is also used in the official solution.
  • If a node has at most $$$k$$$ distinct children (or more precisely, after the compression algorithm it has at most $$$k$$$ items), then the complexity for that node is bounded by $$$O(2^k)$$$. This is because of the way I implemented the DP.
  • Otherwise a node has at most $$$O(\sqrt{size\ of\ subtree})$$$ items for the DP, so the complexity doesn't blow up from any single node.

I failed to prove that the complexity is lower than $$$O(n\sqrt{n})$$$, and I also failed to produce a test case with a longer running time. Maybe this is due to a lack of trying, but it would be interesting if anyone could prove a bound or hack it.

UPDATED: system tests have ended, so you can see my submission 217356778

  • »
    »
    9 months ago, # ^ |
      Vote: +13

    I failed to prove that the complexity is lower than $$$O(n\sqrt{n})$$$

    You can't do that. If your root has one subtree of each of the sizes $$$1, 2, 3, 4, 5 \dots$$$, the complexity is already $$$O(n\sqrt{n})$$$ for solving the knapsack alone.

    Btw, I just changed the bitset to a boolean array in my solution, without any additional tricks, and the time barely changed at all (421 ms to 452 ms).

    • »
      »
      »
      9 months ago, # ^ |
        Vote: 0

      That's true, but what I meant is that I can't prove the constant factor is low enough for it to pass.

      For example, the $$$1, 2, 3, ...$$$ case has a constant factor of about $$$1/3$$$.

  • »
    »
    9 months ago, # ^ |
      Vote: 0

    Can you please explain this "compression algorithm"?

    • »
      »
      »
      9 months ago, # ^ |
        Vote: +19

      It's an optimization for the knapsack problem where you only care about whether a sum can be constructed.

      In short, if you have $$$k$$$ copies of $$$x$$$, then you can take anywhere from $$$0$$$ to $$$k$$$ copies of $$$x$$$ into your sum. We transform the $$$k$$$ copies of $$$x$$$ into a different set of numbers such that every count from $$$0$$$ to $$$k$$$ copies of $$$x$$$ is still constructible, but we have fewer numbers and a faster DP.

      The way I did this: if there are more than $$$2$$$ copies of $$$x$$$, I create items $$$x$$$, $$$2x$$$, $$$4x$$$, and so on. So instead of $$$k$$$ copies of $$$x$$$, I only have $$$O(\log(k))$$$ numbers to run the DP with.

      I don't know where I learned this trick (I might even have come up with it myself), but you can read more about it and knapsack here: https://codeforces.com/blog/entry/98663
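The compression can be sketched like this (a hypothetical helper of my own, not the commenter's code): take doubling chunks of the $$$k$$$ copies, and when fewer copies remain than the next power of two, emit the remainder as one final item.

```cpp
#include <bits/stdc++.h>
using namespace std;

// Replace k copies of value x by items x, 2x, 4x, ... plus a remainder,
// so that every total 0, x, 2x, ..., k*x stays reachable with only
// O(log k) items.
vector<long long> compress(long long x, long long k) {
    vector<long long> out;
    for (long long take = 1; k > 0; take *= 2) {
        long long cur = min(take, k);  // last chunk may be smaller
        out.push_back(cur * x);
        k -= cur;
    }
    return out;
}
```

For example, `compress(x, 8)` yields items worth `x, 2x, 4x, x`, whose subsets realize every multiple of `x` from 0 to 8.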

      • »
        »
        »
        »
        9 months ago, # ^ |
          Vote: 0

        Thanks

      • »
        »
        »
        »
        9 months ago, # ^ |
          Vote: 0

        Am I misunderstanding something? If there are $$$k$$$ copies of $$$x$$$ but $$$k$$$ is a power of $$$2$$$, for example $$$k=8$$$: if we create $$$x, 2x, 4x$$$, we can't construct $$$8x$$$, but if we create $$$x, 2x, 4x, 8x$$$, we can construct sums that exceed the limit of $$$k$$$ copies. How do we deal with this?

        • »
          »
          »
          »
          »
          9 months ago, # ^ |
            Vote: 0

          For $$$k=8$$$ we can use $$$x, 2x, 4x, x$$$: keep doubling, and when the remaining count is smaller than the next power of $$$2$$$, we add it as one final item, as is.

»
9 months ago, # |
  Vote: +21

E1 hint 4 has wrong formatting.

»
9 months ago, # |
  Vote: 0

Problem D using square root decomposition : https://codeforces.com/contest/1856/submission/217356620

  • »
    »
    9 months ago, # ^ |
      Vote: 0

    I need some help building my debug file. I sent you a message, could you please reply there? Thanks in advance :)

»
9 months ago, # |
  Vote: +7

C can be done in $$$O(N^2)$$$ with a greedy: 217325957

»
9 months ago, # |
  Vote: 0

Can anyone tell me why my submission (https://codeforces.com/contest/1856/submission/217361629) is failing on test 46?

»
9 months ago, # |
  Vote: +18

Thanks for the nice round and extremely fast editorial (which seems great as it has hints!).

Here is my detailed feedback on all the problems:

A
B
C
D
E1
E2
»
9 months ago, # |
  Vote: +13

I thought that E2 was not DP and completely unrelated to E1. Optimization with bitsets never crossed my mind.

»
9 months ago, # |
Rev. 3   Vote: -39

.

»
9 months ago, # |
  Vote: +16

Video editorial for problems A & B: https://youtu.be/41aWz1C4QJc

Thought it would be useful.

»
9 months ago, # |
Rev. 2   Vote: +1

Can someone please point out what I am missing in my approach to C (217341446)? My answers seem to be off by one from the intended ones.

I'm iterating over all positions from right to left, trying to build a pyramid as high as possible. The first element is bound by its height and height of its neighbor to the right (if there is one). After fixing the starting point I'm going over all elements to the left one by one.

A small example: n = 4, k = 8, a = [4, 3, 1, 3], the answer in this case is 6

An image for this example case

For each half-pyramid I'm trying to fill as many blocks as possible on top (the number of operations left divided by the length of the segment).

If someone could come up with a counterexample, I would greatly appreciate that.

UPD: Turns out it's a classic implementation (skill) issue: I was updating the height of the first element of a segment, but still calculating the heights of the new elements from the initial height. Correct submission: 217377033

  • »
    »
    9 months ago, # ^ |
    Rev. 3   Vote: 0

    Your logic is to start from some rightmost position and take the maximum possible while iterating down to $$$i \ge 0$$$ until count == k, but how does the code handle sample test case 5, can you explain? The edge case there is that we only have to raise the 1 to 4, not to 6: (6,5,4,1,5) -> (6,5,4,4,5) -> (6,5,5,4,5) -> (6,6,5,4,5) -> (7,6,5,4,5). My code gives a wrong answer for this test case :(. I iterate like you do, but whenever a[i] <= a[i+1] I always set a[i] to a[i+1]+1, so the answer my code gives for this case is 6. metelkov

    • »
      »
      »
      9 months ago, # ^ |
        Vote: 0

      I have a prev_size variable that tracks the height of the last used element, so starting from i = 3 (zero-based indexing) it is equal to 1, then the next element is 4, which is higher than prev_size + 1, so I need to increase the height of the whole previous segment (in this case only the first element) by 2, and I have enough operations to do that. Then the rest of the elements form a perfect staircase: 6-5-4-3, so I don't need to apply any more operations, but since I have 4 more operations left I can bring this whole segment up by 1: 7-6-5-4.

»
9 months ago, # |
Rev. 3   Vote: +9

Problem C can easily be solved in $$$O(N \log N \log C)$$$.
I forgot the constraint on $$$N$$$ during the contest, so I came up with this solution (217368352).

However, I couldn't implement the binary search correctly during the contest.
I finally fixed my solution after the contest. :(

  • »
    »
    9 months ago, # ^ |
      Vote: 0

    It's possible to speed this solution up to $$$O(N\log N)$$$ by using a suffix minimum. My submission: 217428923

    • »
      »
      »
      9 months ago, # ^ |
        Vote: 0

      Why a suffix minimum? Please elaborate a bit on the logic your code works on.

      • »
        »
        »
        »
        9 months ago, # ^ |
          Vote: 0

        $$$suf[i]$$$ equals the first $$$j \ge i$$$ that satisfies $$$a[j] \ge mid - (j - i)$$$. To build this array we rewrite the inequality slightly: $$$a[j] + j - mid \ge i$$$. So the possible $$$i$$$ for each $$$j$$$ lie in the segment $$$[1, a[j] + j - mid]$$$, and therefore a candidate for $$$suf[i]$$$ is also applicable to all indices before $$$i$$$. It means $$$suf[i]$$$ is either $$$suf[i + 1]$$$, if $$$a[j] + j - mid > i$$$, or the minimal $$$j \ge i$$$ that satisfies $$$a[j] + j - mid = i$$$ (this can easily be handled by pre-building a suffix array). If I was unclear, please tell me.

»
9 months ago, # |
Rev. 3   Vote: +75

E can be solved in $$$n \log^3(n)$$$ using FFT.

Consider the sizes of the children of each node: say for a node $$$u$$$, the child subtree sizes are $$$s_1, s_2, ... , s_c$$$. Then consider the polynomial $$$(1 + x^{s_1})(1 + x^{s_2})\cdots(1 + x^{s_c})$$$. Its coefficients can be computed in $$$n \log(c) \log(n)$$$. Once we have these coefficients, we find the number closest to $$$n/2$$$ having a non-zero coefficient in the final polynomial. We only do this when every child subtree has size smaller than $$$n/2$$$, so we'll need to do this polynomial computation at most $$$\log(n)$$$ times.

This gives an overall time complexity of $$$n \log^2(n) \cdot \log(n) = n \log^3(n)$$$.

  • »
    »
    9 months ago, # ^ |
      Vote: +10

    Good luck trying to make this approach pass the TL :)

  • »
    »
    9 months ago, # ^ |
      Vote: +50

    You can remove one log by computing $$$\exp(\log(1 + x^{s_1}) + \log(1 + x^{s_2}) + \dots)$$$.

    An example of a problem using that trick is https://www.codechef.com/JUNE20A/problems/PPARTS

  • »
    »
    9 months ago, # ^ |
      Vote: 0

    Thus, we'll need to do this polynomial computation at most $$$log(n)$$$ times.

    Could you explain this in more detail?

    • »
      »
      »
      9 months ago, # ^ |
        Vote: 0

      Sorry, let me try to explain what I really meant.

      Consider any node for which we're currently computing the answer. If it has a child with size greater than $$$(size\text{_} of\text{_} subtree)/2$$$, then we don't need to perform this computation for the node (the best partition puts the heaviest child on one side and the rest on the other), and we can move directly to this heaviest child and perform the same computation there. Otherwise, we perform the computation (taking $$$sz \log^2(sz)$$$ time).

      In that case all children of the node have subtree size at most $$$sz/2$$$, and we'll perform the same computation for each of them. Notice that whenever we perform a computation, the maximum size of any subtree we deal with does not exceed half of $$$sz$$$.

      Thus we can think of it as performing computations at different levels. At the first level, the size of each node is at most $$$sz$$$; at the second level, at most $$$sz/2$$$; at the third, at most $$$sz/4$$$; and so on, so there can be at most $$$\log(sz)$$$ levels. The computation at each level, summed over all nodes at that level, requires at most $$$sz \log^2(sz)$$$ operations, giving $$$sz \log^3 (sz)$$$ overall.

  • »
    »
    9 months ago, # ^ |
      Vote: 0

    If the max child subtree size is bigger than $$$n/2$$$, how do we deal with the problem?

    • »
      »
      »
      9 months ago, # ^ |
        Vote: 0

      Probably just multiply the max child's subtree size with the sum of the rest, since you know that split is optimal.

»
9 months ago, # |
  Vote: 0

Ahh... yet again I was so drained by B (silly mistake) that I had no patience left for C.

»
9 months ago, # |
Rev. 2   Vote: 0

How do you solve C using DP, please?

»
9 months ago, # |
  Vote: 0

Can anyone help me debug my solution for C? I am getting a wrong answer on some specific cases and can't figure out why.

»
9 months ago, # |
  Vote: +17

video editorial for problem C binary search solution: https://youtu.be/XSsY00NV0ck

»
9 months ago, # |
  Vote: 0

Can someone please tell me the DP solution to problem C (I already know the binary search solution)?

»
9 months ago, # |
  Vote: +11

$$$O(n\log n)$$$ is possible for C using binary search and the two pointers method.

  • »
    »
    9 months ago, # ^ |
      Vote: 0

    Could you please explain your approach in a bit more detail? Thanks in advance.

    • »
      »
      »
      9 months ago, # ^ |
        Vote: +12

      Binary search over the answer; let $$$mid$$$ be the number we're checking for attainability. For each $$$i$$$ we want to find the closest position $$$p[i]$$$ to its right such that $$$arr[p[i]] \ge mid-(p[i]-i)$$$. It's not hard to see that if $$$i < j$$$ then $$$p[i] \le p[j]$$$, so we can compute $$$p[]$$$ using two pointers. Now all we have to do is check whether there exists an $$$i$$$ such that the number of operations needed to change the segment $$$arr[i .. p[i]-1]$$$ into the sequence $$$mid, mid-1, .., mid-(p[i]-1-i)$$$ is $$$\le k$$$. That number of operations equals the difference of the sums of the two sequences.
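The check described above can be sketched as follows (my own implementation of the idea, not the commenter's submission; `check` and `pre` are names I chose):

```cpp
#include <bits/stdc++.h>
using namespace std;

// Can some element be raised to `mid` with at most k operations?
bool check(long long mid, const vector<long long>& a, long long k) {
    int n = a.size();
    vector<long long> pre(n + 1, 0);             // prefix sums of a[]
    for (int i = 0; i < n; ++i) pre[i + 1] = pre[i] + a[i];
    int p = 0;
    for (int i = 0; i < n; ++i) {
        p = max(p, i);
        // p[i] is non-decreasing in i, so the pointer only moves right
        while (p < n && a[p] < mid - (p - i)) ++p;
        if (p == n) return false;                // thresholds only get harder later
        long long L = p - i;                     // positions i..p-1 become mid, mid-1, ...
        long long need = L * mid - L * (L - 1) / 2 - (pre[p] - pre[i]);
        if (need <= k) return true;
    }
    return false;
}
```

With the sample from the comment above this thread (n = 4, k = 8, a = [4, 3, 1, 3]), `check(6, ...)` holds while `check(7, ...)` fails, so the binary search settles on 6.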

»
9 months ago, # |
  Vote: +4

In problem E2, does the condition in Hint 2 (if there is a subtree that is bigger than the sum of the sizes of the other subtrees, you don't have to do knapsack) help improve the time complexity, or is it merely a heuristic? I was able to figure out the $$$O(N\sqrt{N})$$$ solution but didn't know about this or the bitset optimisation :(

  • »
    »
    9 months ago, # ^ |
      Vote: +8

    Assuming we use bitsets no larger than twice the subtree size, the total size of the bitsets used will not exceed $$$O(n\log n)$$$ if we apply the idea in Hint 2; otherwise we may need $$$O(n^2)$$$ (if we reuse the same bitset for another dp, we count the space again). This can be proven by considering an HLD of the tree, or just a small-to-large argument.

  • »
    »
    9 months ago, # ^ |
      Vote: +18

    It helps because whenever you have to run the computation for a subtree of size N, you split it into subtrees that sum to less than N, each of size less than N/2. I think we can prove a bound like T(N) = O(N^1.5) + 2*T(N/2); you can use https://www.nayuki.io/page/master-theorem-solver-javascript to calculate the result.

»
9 months ago, # |
Rev. 3   Vote: 0

Hello guys, I have a question about problem B, specifically about this test case: n = 12, a = [1 2 1 2 1 3 1 2 1 1 1 2]. Why is the output NO even though it seems it should be YES? (Maybe I'm blind or I just misread the problem.) Sorted, the array is 1 1 1 1 1 1 1 2 2 2 2 3; we change this to 1 1 1 1 1 1 2 2 2 2 2 2 and then take b = 2 2 2 2 2 2 1 1 1 1 1 1. The sums are the same and a[i] != b[i] for all indices.

  • »
    »
    9 months ago, # ^ |
      Vote: 0

    You won't be able to change all the ones. If you reduce every element > 1 down to 1, the sum left to distribute is 6 while there are 7 ones. So there is at least one leftover 1.

    • »
      »
      »
      9 months ago, # ^ |
      Rev. 2   Vote: 0

      So array B needs to start out as a copy of array A? I only changed one 1 into a 2 and reduced the only 3 I have to a 2.

      • »
        »
        »
        »
        9 months ago, # ^ |
          Vote: 0

        Yes. You can think of it as: you have to change the element at every single index of b while the total sum still remains the same.

        • »
          »
          »
          »
          »
          9 months ago, # ^ |
            Vote: 0

          So basically I'm not wrong, since that wasn't mentioned in the problem? (Or I'm just illiterate and dumb.)

»
9 months ago, # |
Rev. 4   Vote: +3

Got a bit confused by B during the contest and wasted some time before arriving at a really intuitive solution:

Reduce every element greater than one to 1. For each such element x this frees up x - 1 to hand out to the other elements; let add be the sum of these surpluses. Now count the ones in the array, countOfOnes: we have to distribute the surplus among them. If add >= countOfOnes, we can increment every 1 and hence give each a new value. If add is smaller, at least one 1 cannot be changed, so the answer is No. And obviously if n = 1 we can't change the single value, so the answer is No.

Spoiler
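The counting argument above can be sketched like this (my own transcription with a hypothetical function name, not the commenter's spoiler code):

```cpp
#include <bits/stdc++.h>
using namespace std;

// Every b[i] must differ from a[i] while the total sum is preserved:
// drop every a[i] > 1 down to 1, bank the surplus, and check that the
// surplus covers all original 1s (each needs at least +1).
bool goodArray(const vector<long long>& a) {
    if (a.size() == 1) return false;   // a single element can never change
    long long surplus = 0, ones = 0;
    for (long long x : a) {
        if (x == 1) ++ones;
        else surplus += x - 1;         // reduce x to 1, keep the rest to hand out
    }
    return surplus >= ones;
}
```

On the test case discussed above, [1 2 1 2 1 3 1 2 1 1 1 2], the surplus is 6 against 7 ones, so the answer is No.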
»
9 months ago, # |
  Vote: 0

In problem $$$E1$$$ I tried to do something similar to an inorder traversal,

making $$$p_i$$$ equal to the visit order of node $$$i$$$.

By doing that, every node that is the $$$lca$$$ of at least two nodes is counted in the answer with (number of nodes to its left) $$$\times$$$ (number of nodes to its right).

Can someone explain why this is wrong?

My submission: 217355939.

  • »
    »
    9 months ago, # ^ |
      Vote: +2

    Consider a case where the sizes of the children of some node are 1 1 2 2. Your solution will count (1+1)*(2+2) = 8 triplets, whereas the optimal solution divides the children equally: (1+2)*(1+2) = 9 triplets.

  • »
    »
    9 months ago, # ^ |
    Rev. 2   Vote: +9

    I had the same idea at first and implemented it right away. After getting WA on test 7, I realized there was a problem.

    How do you define the visit order on a multi-way tree? We don't know which son is first and which is last; dividing the sons this way seems impossible.

    The right solution is to divide the sons into two separate sets so that the difference of the two set sums is minimal.

    For example, if the child subtree sizes are $$$1,2,3,4$$$, your solution may count $$$(1+2)\times(3+4)=21$$$, but the optimal solution is $$$(1+4)\times(2+3)=25$$$, which has the minimum difference $$$(1+4)-(2+3)=0$$$ in this case.
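The "divide into two sets" step is a plain subset-sum; a minimal sketch (my own, with a fixed-size bitset that is only big enough for small examples):

```cpp
#include <bits/stdc++.h>
using namespace std;

// Maximum number of (left, right) pairs across one node: split the child
// subtree sizes into two groups whose sums are as balanced as possible.
long long maxPairs(const vector<int>& childSizes) {
    int total = accumulate(childSizes.begin(), childSizes.end(), 0);
    bitset<4096> can;                    // enough for this sketch's small totals
    can[0] = 1;
    for (int s : childSizes) can |= can << s;   // subset-sum over the sizes
    long long best = 0;
    for (int s = 0; s <= total; ++s)
        if (can[s]) best = max(best, 1LL * s * (total - s));
    return best;
}
```

`maxPairs({1, 2, 3, 4})` gives 25, matching the $$$(1+4)\times(2+3)$$$ split above; summing this over all nodes is the E1 answer.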

    • »
      »
      »
      9 months ago, # ^ |
        Vote: 0

      Can you explain how we ensure that, in the optimal solution, the sons themselves are never subdivided?

      That is, why are we not considering possibilities like $$$(1 + 2 + 2) \times (2 + 3)$$$? (I know it's not optimal in this case; I'm asking in general.)

      • »
        »
        »
        »
        9 months ago, # ^ |
          Vote: 0

        You cannot subdivide the sons. After the division we count all pairs of the form $$$(a, b)$$$ where $$$a$$$ is an item from the first group and $$$b$$$ from the second. If we were to split vertices $$$v, u$$$ from the same son's subtree into the two groups, we would also count $$$(v, u)$$$; but their lca isn't the root of the current subtree, so we must not count that pair here.

        • »
          »
          »
          »
          »
          9 months ago, # ^ |
            Vote: 0

          Sorry if I wasn't clear before, but this is not what I wanted to ask.

          Hint 1 of E2 says:

          Hint 1

          Could you tell me why this is true? The solution to E1 does not use this optimization; it iterates over divided subtrees as well.

          • »
            »
            »
            »
            »
            »
            9 months ago, # ^ |
              Vote: 0

            It does use this optimization. In the dfs we push the sons' subtree sizes into the vector $$$a$$$, which we later use for the knapsack.

            • »
              »
              »
              »
              »
              »
              »
              9 months ago, # ^ |
                Vote: 0

              I still don't get it. Why do we define the variables $$$0 \leq b_i \leq s_i$$$ if we are not considering splitting the subtrees themselves into groups of $$$b_i$$$ and $$$s_i - b_i$$$?

              If I am not misunderstanding, in E2 we need the fact that $$$b_i = s_i$$$ (or 0) in the optimal solution, right?

              Where does that come from?

              • »
                »
                »
                »
                »
                »
                »
                »
                9 months ago, # ^ |
                  Vote: 0

                I don't get where you are getting $$$b_i$$$ from. Both E1 and E2 use the original sons' subtree sizes.

                • »
                  »
                  »
                  »
                  »
                  »
                  »
                  »
                  »
                  9 months ago, # ^ |
                  Rev. 2   Vote: 0

                  I am talking about the $$$b_i$$$'s from the tutorial of E1.

                  Tutorial to E1

                • »
                  »
                  »
                  »
                  »
                  »
                  »
                  »
                  »
                  9 months ago, # ^ |
                    Vote: 0

                  I've added the tutorial for E2; I think the answer to your original question can be found there.

                • »
                  »
                  »
                  »
                  »
                  »
                  »
                  »
                  »
                  9 months ago, # ^ |
                    Vote: 0

                  This tutorial is so messy that I'm not even going to bother reading it. An easier way to understand this goes as follows. Suppose we're at vertex $$$x$$$ and want to find the maximum number of pairs $$$(v,u)$$$ such that $$$a_v < a_{lca(v,u)} < a_u$$$ and $$$lca(v,u) = x$$$. Let $$$S_1$$$ be the set of all vertices $$$v$$$ in the subtree of $$$x$$$ such that $$$a_v < a_{lca(v,u)}$$$, and analogously for $$$S_2$$$. Let $$$y$$$ be some son of $$$x$$$ and $$$a, b$$$ some vertices in the subtree of $$$y$$$. We will show that it never makes sense to put $$$a$$$ and $$$b$$$ into different sets. Let $$$s_1$$$ be the size of $$$S_1$$$ without the vertices from the subtree of $$$y$$$, and likewise $$$s_2$$$. If we put them in separate sets, their contribution to the answer at $$$x$$$ is $$$s_1+s_2$$$, whereas if we put them in the same set it is either $$$2s_1$$$ or $$$2s_2$$$, so we pick the greater one. It is easy to show that $$$s_1+s_2 \le max(2s_1, 2s_2)$$$. So it is always optimal to place all vertices from the same subtree into one of those sets.

»
9 months ago, # |
  Vote: +11

Can someone explain how solving knapsack in $$$O(s\cdot\sqrt{s})$$$ leads to solving E2 in $$$O(n\cdot\sqrt{n})$$$?

During the contest I thought it was $$$O(n\cdot\log{n}\cdot\sqrt{n})$$$: for a vertex, the sum of the weights is the sum of the sizes of its light children, which is bounded by $$$O(n\cdot\log{n})$$$ over the whole tree, and the number of items at a vertex is bounded by $$$O(\sqrt{subtree\ size})$$$.

  • »
    »
    9 months ago, # ^ |
      Vote: +8

    First, prove that this complexity leads to $$$n \cdot (\sqrt{n} + \sqrt{\frac{n}{2}} + \sqrt{\frac{n}{4}} + \sqrt{\frac{n}{8}} + \dots)$$$.

    Then we can observe that $$$\sqrt{n} + \sqrt{\frac{n}{4}} + \sqrt{\frac{n}{16}} + \dots = \sqrt{n} + \frac{\sqrt{n}}{2} + \frac{\sqrt{n}}{4} + \dots < 2 \cdot \sqrt{n}$$$.

    And the same thing we can do with even terms sum ($$$\sqrt{\frac{n}{2}} + \sqrt{\frac{n}{8}} + \sqrt{\frac{n}{32}} + \dots$$$).

    • »
      »
      »
      9 months ago, # ^ |
        Vote: 0

      Can you elaborate on how the complexity leads to the first equation?

      • »
        »
        »
        »
        9 months ago, # ^ |
          Vote: +16

        Let's say a vertex has level $$$i$$$ if we calculate the knapsack in this vertex and in exactly $$$i$$$ of its ancestors. Because we calculate the knapsack only in light children, the subtree size of a level-$$$i$$$ root is $$$\le \frac{n}{2^i}$$$. And, by definition, subtrees whose roots have the same level can't be nested. Therefore the sum of the sizes of all level-$$$i$$$ subtrees is $$$\le n$$$, and so the knapsack calculation for all of them together takes $$$\le n \cdot \sqrt{\frac{n}{2^i}}$$$. This leads to the first equation.

»
9 months ago, # |
Rev. 5   Vote: 0

I updated the provided solution for problem C because that portion of the code was creating confusion.

Updated Code
  • »
    »
    9 months ago, # ^ |
      Vote: +12

    Thanks! By the way, you should put the code in a spoiler; right now it takes up quite a lot of space.

»
9 months ago, # |
Rev. 2   Vote: 0

"If there is a subtree that is bigger than the sum of the sizes of the other subtrees, you don't have to do knapsack."

Why does this speed up the solution so much? I tried to solve the problem without it, but even in the Gym with a 15 s time limit it got TLE.

Is it something similar to small-to-large?

»
9 months ago, # |
  Vote: +4

For E1, I iterated over all the possible lca(u,v). After fixing a node, let $$$c_{1},c_{2},c_{3}... c_{k}$$$ be the sizes of the subtrees rooted at its children. We divide them into two sets $$$S_{1}, S_{2}$$$, and for that node the answer is the maximum of Sum($$$S_{1}$$$)*Sum($$$S_{2}$$$) over all such divisions. This is $$$O(n^{2})$$$.

How can I optimize this for E2?

»
9 месяцев назад, # |
  Проголосовать: нравится +75 Проголосовать: не нравится

I used an approach for problem D that only uses $$$3 \cdot n^2$$$ coins. The idea is similar to the editorial, except you just keep a list of potential candidates for the maximum (initially set to all $$$n$$$ positions). Then you always greedily compare the two closest candidates and remove one of them.

The code is 217298513, and there is also a YouTube video explanation here.

Note that in the video I only prove $$$\frac{\pi^2}{3} \cdot n^2 \approx 3.29 \cdot n^2$$$, but you can actually prove that it's $$$3 \cdot n^2$$$.

»
9 месяцев назад, # |
  Проголосовать: нравится -44 Проголосовать: не нравится

bitset in author's solution. why??????

»
9 месяцев назад, # |
  Проголосовать: нравится +18 Проголосовать: не нравится

Very cool task E2, thank you!

»
9 месяцев назад, # |
  Проголосовать: нравится 0 Проголосовать: не нравится

how to do hint 3 in problem E2?

»
9 месяцев назад, # |
  Проголосовать: нравится 0 Проголосовать: не нравится

I encountered a weird bug while solving E2 and I can't understand what is wrong. This 217392127 passes freely, whereas this one 217392094 gets RE. Notice that the only difference between these codes is one commented-out line (some bitset operation); the line is inside an else statement which I intentionally made never execute. The RE occurs on a line-shaped graph (notice that on such a graph DFS always returns before running the knapsack part). Any ideas why one solution works and the other doesn't?

»
9 месяцев назад, # |
  Проголосовать: нравится 0 Проголосовать: не нравится

It seems that the time complexity of the official solution for E2 is $$$O(\frac{n\sqrt{n}\log n}{\omega})$$$, the same as my submission's. Why did I get TLE?

  • »
    »
    9 месяцев назад, # ^ |
      Проголосовать: нравится 0 Проголосовать: не нравится
    int k = 1;
    while (k <= cnt) {
    	bst = (bst << (k * vec[i])) | (bst >> (k * vec[i]));
    	cnt -= k;
    }
    

    You don't change k here, so it is always equal to $$$1$$$.

»
9 месяцев назад, # |
  Проголосовать: нравится -7 Проголосовать: не нравится

Can somebody suggest free resources for reaching specialist?

»
9 месяцев назад, # |
  Проголосовать: нравится +1 Проголосовать: не нравится

Could someone please teach me how to prove complexity in E2?

»
9 месяцев назад, # |
  Проголосовать: нравится +19 Проголосовать: не нравится

These are pretty good problems. D, E1 and E2 are challenging.

»
9 месяцев назад, # |
  Проголосовать: нравится 0 Проголосовать: не нравится

Good problem C! But I used an $$$O(n^3)$$$ approach to solve it.

This is my code:

#include<bits/stdc++.h>
using namespace std;
const int N=1e3+5;
int T,n,k,a[N];
inline void solve(){
	int ans=0;
	for(int l=1;l<=n;++l)
		for(int r=l;r<=n;++r){
			int mx=0,s=0;
			for(int i=r,t=0;i>=l;--i,++t)
				mx=max(mx,a[i]-(a[r]+t));
			if(mx&&(r==n||a[r]+mx>a[r+1]+1))
				continue;
			bool flg=0;
			for(int i=r,t=mx;i>=l;--i,++t){
				s+=a[r]+t-a[i];
				if(s>k){
					flg=1;
					break;
				}
			}
			if(!flg)
				ans=max(ans,a[r]+mx+r-l+min(max(0,a[r+1]+1-(a[r]+mx)),(k-s)/(r-l+1)));
		}
	cout<<ans<<endl;
}
int main(){
	ios::sync_with_stdio(0);
    cin.tie(0);
    cout.tie(0);
    cin>>T;
    while(T--){
    	cin>>n>>k;
    	for(int i=1;i<=n;++i)
    		cin>>a[i];
    	solve();
    	for(int i=1;i<=n;++i)
    		a[i]=0;
	}
	return 0;
}
»
9 месяцев назад, # |
  Проголосовать: нравится +1 Проголосовать: не нравится

The "dynamic" bitset trick using std::bitset in the solution is really cool!

»
9 месяцев назад, # |
Rev. 2   Проголосовать: нравится 0 Проголосовать: не нравится

Why doesn't problem B consider the right boundary? E.g. 1 3 999999999 999999999 1000000000.

»
9 месяцев назад, # |
  Проголосовать: нравится 0 Проголосовать: не нравится

Doesn't the dp part in E1 have time complexity $$$n \cdot sum$$$, which is $$$O(n^{3})$$$? Can someone explain how it's $$$O(n^{2})$$$.

  • »
    »
    9 месяцев назад, # ^ |
      Проголосовать: нравится +3 Проголосовать: не нравится

    Let the number of children of the $$${ith}$$$ node be denoted as $$${n_i}$$$. Now, $$$\sum_{i=1}^{i=n} {n_i = n - 1}$$$. This is because every child has only one parent, which further means that every node will only be counted once in any of $$${n_i \forall i \in [1, n]}$$$.

    Now, the dp in question will basically run (for every node $$${i}$$$) on $$${n_i}$$$ nodes. And as it will run for every node $$${i}$$$, we get total time taken = $$$\sum_{i=1}^{i=n} {O({n_i}*sum)}$$$ = $$${O(n * sum)}$$$ = $$${O({n^2})}$$$

  • »
    »
    9 месяцев назад, # ^ |
      Проголосовать: нравится +8 Проголосовать: не нравится

You can read #7 of this blog.

»
9 месяцев назад, # |
  Проголосовать: нравится +1 Проголосовать: не нравится

Has a problem similar to C occurred before in a contest? I couldn't think of how to apply binary search in this question.

»
9 месяцев назад, # |
  Проголосовать: нравится 0 Проголосовать: не нравится

Problem C impressed me. At first it appeared easy, but when I started solving it I realized it was not that easy; I tried my best but was only able to solve A and B.

Today I saw the tutorial, and with this problem and its tutorial I really learned something new.

I just want to thank the author for such a nice problem C.

»
9 месяцев назад, # |
  Проголосовать: нравится 0 Проголосовать: не нравится

https://codeforces.com/contest/1856/submission/217372842 What's wrong with my solution for C? Isn't dynamic programming enough to solve this problem?

»
9 месяцев назад, # |
  Проголосовать: нравится +17 Проголосовать: не нравится

Can someone explain me the $$$\text{dp}$$$ states in $$$\text{E1}$$$?

It seems currently the Tutorial is not available for $$$\text{E1}$$$ and $$$\text{E2}$$$.

»
9 месяцев назад, # |
  Проголосовать: нравится 0 Проголосовать: не нравится

Can anybody help me please? I understand that in E2 we used bitsets to optimize the knapsack code, but I can't understand the purpose of the code below:

    int pi = 0;
    for (int i = 1; i <= n; i++) {
        if (a[i] != a[i - 1]) {
            ll cnt = i - pi;
            ll x = a[i - 1];
            ll j = 1;
            while (j < cnt) {
                b.push_back(x * j);
                cnt -= j;
                j *= 2;
            }
            b.push_back(x * cnt);
            pi = i;
        }
    }
»
9 месяцев назад, # |
  Проголосовать: нравится 0 Проголосовать: не нравится

In problem B, I replaced 1 with 2 and all the other values with 1 (setting the last element so that the sum stays the same). Why doesn't this work?

  • »
    »
    9 месяцев назад, # ^ |
      Проголосовать: нравится 0 Проголосовать: не нравится

    There are cases when you cannot replace all 1's with 2. Are you handling that?

    • »
      »
      »
      9 месяцев назад, # ^ |
        Проголосовать: нравится 0 Проголосовать: не нравится

      I just understood the solution(by proving it on pen and paper), so now I understand where I went wrong. Thanks :)

»
9 месяцев назад, # |
  Проголосовать: нравится +11 Проголосовать: не нравится

Finally Specialist...!!! Yayy

»
9 месяцев назад, # |
  Проголосовать: нравится 0 Проголосовать: не нравится

Can anyone share ideas on how to solve C in $$$O(n^2)$$$ using dp?

»
9 месяцев назад, # |
Rev. 2   Проголосовать: нравится 0 Проголосовать: не нравится

In E1, it seems obvious that all of the values allocated to a specific subtree should be either all larger or all smaller than the value of the LCA. But by allocating both larger and smaller numbers to a subtree, the product could become higher, and we would then need to exclude the pairs inside this subtree. Can anyone prove that the final answer won't become larger? Thanks a lot.

  • »
    »
    9 месяцев назад, # ^ |
      Проголосовать: нравится +3 Проголосовать: не нравится

    I'm wondering the same thing, however in E1 the model solution uses DP where they try all distributions.

    • »
      »
      »
      9 месяцев назад, # ^ |
        Проголосовать: нравится 0 Проголосовать: не нравится

      Thanks, I didn't notice that.

      But in this code, I did not enumerate the number of large and small numbers assigned to the subtree, and I did not consider the situation that the large and small numbers are in the same subtree when counting the results, and I got Accepted.

So is this the right way to solve the problem, or are the tests not strong enough?

      • »
        »
        »
        »
        9 месяцев назад, # ^ |
          Проголосовать: нравится +8 Проголосовать: не нравится

        No, it's right, E2 uses this. I don't think you are the only one who didn't bother proving it before submitting.

  • »
    »
    9 месяцев назад, # ^ |
      Проголосовать: нравится +5 Проголосовать: не нравится

Commenting here so someone can prove it later; I am also stuck trying to prove this claim.

  • »
    »
    9 месяцев назад, # ^ |
    Rev. 2   Проголосовать: нравится +11 Проголосовать: не нравится

    Assume the LCA is node $$$x$$$ which has $$$n$$$ children and value $$$x_v$$$, and you have assigned $$$a_i$$$ nodes of the $$$i^{th}$$$ child subtree values smaller than $$$x_v$$$ and assigned $$$b_i$$$ nodes values larger than $$$x_v$$$, the contribution of the $$$i^{th}$$$ subtree will be $$$a_i\cdot \sum_{j=1}^{j=n}{b_j}+b_i\cdot \sum_{j=1}^{j=n}{a_j}$$$ where $$$j\neq i$$$.

Now if we decide to merge the part which is multiplied by $$$min(\sum_{j=1}^{j=n}{a_j}, \sum_{j=1}^{j=n}{b_j})$$$ with the other part, this will change the contribution by $$$ChosenPart\cdot (max(\sum_{j=1}^{j=n}{a_j}, \sum_{j=1}^{j=n}{b_j}) - min(\sum_{j=1}^{j=n}{a_j}, \sum_{j=1}^{j=n}{b_j}))$$$, which means the contribution doesn't decrease.

    • »
      »
      »
      9 месяцев назад, # ^ |
        Проголосовать: нравится 0 Проголосовать: не нравится

If I understand correctly, when you fix the other parts and change the distribution of values in a subtree, the contribution of that subtree indeed reaches its maximum or minimum when the subtree gets all small or all large numbers. However, this contribution is not independent, as it may reduce the contribution of other subtrees.

      • »
        »
        »
        »
        9 месяцев назад, # ^ |
          Проголосовать: нравится 0 Проголосовать: не нравится

We can start the proof from the upper subtrees; this will not affect the lower subtrees, since any set of values allocated to the lower subtrees can be rearranged appropriately. And the change in the lower subtrees will not affect the upper subtrees, as the values in any subtree are all less than or greater than any upper-level LCA.

        • »
          »
          »
          »
          »
          9 месяцев назад, # ^ |
            Проголосовать: нравится 0 Проголосовать: не нравится

It is correct that contributions between different levels are independent of each other, but what I want to say is that the contributions of subtrees within the same level affect each other. For example, if you change the $$$a_i$$$ of a subtree to get a higher contribution, it will feed back into the contribution of another subtree $$$j$$$ at the same level, since that subtree's contribution sums over $$$a_i$$$.

          • »
            »
            »
            »
            »
            »
            9 месяцев назад, # ^ |
              Проголосовать: нравится +16 Проголосовать: не нравится

            Note that when I calculated the contribution of the $$$i^{th}$$$ subtree, I calculated its whole contribution (of both $$$a_i$$$ and $$$b_i$$$), so the contributions of other subtrees within the same level should not consider $$$i$$$ at all.

»
9 месяцев назад, # |
  Проголосовать: нравится 0 Проголосовать: не нравится

DP solution for problem C in case anybody needs it. I used binary search to finally filter out the value. If any further explanation is required, feel free to ask; I'll try my best to explain.

»
9 месяцев назад, # |
Rev. 3   Проголосовать: нравится +19 Проголосовать: не нравится

tutorial for E1/E2 when?

»
9 месяцев назад, # |
Rev. 2   Проголосовать: нравится 0 Проголосовать: не нравится

For A, a slick way to do it is:

Spoiler

$$$O(n\log n)$$$, of course.

»
9 месяцев назад, # |
  Проголосовать: нравится 0 Проголосовать: не нравится

For problem E2, I think my code works the same as the one in the editorial but it keeps on getting TLE on testcase 5. Can anyone help me? 217457280

  • »
    »
    9 месяцев назад, # ^ |
      Проголосовать: нравится 0 Проголосовать: не нравится

The problem with your code is that you're using a max-size bitset for every dp. The way around this is to use templates to generate solve functions with bitset sizes that are powers of 2. This trick is used in the editorial solution.

    • »
      »
      »
      9 месяцев назад, # ^ |
        Проголосовать: нравится 0 Проголосовать: не нравится

      Thank you so much for your help!

Why does a bitset of size maxn take much longer even though I'm not using all of the bits in every calculation?

»
9 месяцев назад, # |
  Проголосовать: нравится +8 Проголосовать: не нравится

I have solved $$$E1$$$ using a different DP, but I spent some time to understand what the DP used in the editorial's solution means, and here is what I got (note that I used some variable names from the code):

For a node $$$v$$$ with $$$n$$$ children, we make $$$n$$$ iterations, in the $$$j^{th}$$$ iteration we add the $$$j^{th}$$$ child subtree, which has a size $$$x$$$. $$$cs$$$ is the sum of sizes of the previous subtrees excluding the $$$j^{th}$$$.

In the $$$j^{th}$$$ iteration, using the first $$$j$$$ subtrees, we calculate $$$dp[i]$$$: the maximum number of good pairs having LCA $$$v$$$, if we have $$$i$$$ nodes with values less than $$$v$$$'s value, where $$$pr$$$ of them are chosen from $$$cs$$$.

»
9 месяцев назад, # |
Rev. 3   Проголосовать: нравится 0 Проголосовать: не нравится

A can be solved in $$$O(n)$$$ by checking for the largest element that is larger than its next element.

»
9 месяцев назад, # |
  Проголосовать: нравится 0 Проголосовать: не нравится

Can someone please tell me why this Java code doesn't get accepted, while the same solution written in C++ got accepted? https://codeforces.com/contest/1856/submission/217493262 Please check my submissions; I would be really glad if someone could tell me.

»
9 месяцев назад, # |
  Проголосовать: нравится +27 Проголосовать: не нравится

Here's my solution for E2 without any bitset optimization or compression, just using the fact that there are at most around sqrt(2*subtree_sum) distinct numbers which sum up to subtree_sum. I am wondering how this solution passes in just 1310 ms.

Code
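The "at most sqrt(2*subtree_sum) distinct numbers" bound follows because $$$k$$$ distinct positive integers sum to at least $$$k(k+1)/2$$$:

```latex
1 + 2 + \dots + k = \frac{k(k+1)}{2} \le S
\quad\Longrightarrow\quad
k \le \sqrt{2S}.
```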
»
9 месяцев назад, # |
Rev. 2   Проголосовать: нравится +3 Проголосовать: не нравится

Problem E2:

Let $$$d_1,d_2,\ldots,d_p$$$ be the sizes of the subtrees of vertices in $$$L_k$$$. Then we need to maximize the value of $$$d_1\sqrt{d_1}+d_2\sqrt{d_2}+\ldots+d_p\sqrt{d_p}$$$ over all $$$d_1,d_2,\ldots,d_p$$$ such that $$$1\le d_i\le\frac{n}{2^k}$$$ and $$$d_1+\ldots+d_p\le n$$$.

Can't we just prove that: $$$d_1\sqrt{d_1}+d_2\sqrt{d_2}+…+d_p\sqrt{d_p} \leq d_1\sqrt{\frac{n}{2 ^ k}}+d_2\sqrt{\frac{n}{2 ^ k}}+…+d_p\sqrt{\frac{n}{2 ^ k}} = \sqrt{\frac{n}{2 ^ k}} (d_1 + d_2 + ... + d_p) \leq \sqrt{\frac{n}{2 ^ k}}\cdot n = \sqrt{t} \cdot n$$$

»
9 месяцев назад, # |
  Проголосовать: нравится 0 Проголосовать: не нравится

Why does code with lambda expressions take more stack space? 218007219 and 218004946 are exactly the same code, but in one of them I used a lambda function instead of an ordinary function, and it gives me a stack overflow on large inputs. All I know is that after a function is executed it is cleared from the stack, and the same happens with lambdas. Moreover, the code runs in $$$O(n \sqrt{n})$$$!

»
9 месяцев назад, # |
  Проголосовать: нравится 0 Проголосовать: не нравится

Has anyone managed to solve E2 with recursive dfs?

218054678

I eventually solved it without dfs, but I wonder whether recursive dfs always stack-overflows on graph problems with $$$n = 10^6$$$?

»
9 месяцев назад, # |
  Проголосовать: нравится 0 Проголосовать: не нравится

I do not understand why I get TLE on problem E2. Nothing seems wrong.

Here is my code: 218196327

  • »
    »
    9 месяцев назад, # ^ |
    Rev. 2   Проголосовать: нравится +1 Проголосовать: не нравится

    You can't use a bitset of length $$$n$$$ for every vertex, it makes the solution run in $$$O(\frac{n^2}{w})$$$. For example, on a tree with a lot of vertices that have three subtrees of sizes 2, 3, and 3.
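To spell out the bound: on such a tree there are $$$\Theta(n)$$$ vertices whose knapsack step touches a full length-$$$n$$$ bitset, so the total work is

```latex
\Theta(n) \cdot \Theta\!\left(\frac{n}{w}\right) = \Theta\!\left(\frac{n^2}{w}\right),
```

which is far too slow for $$$n = 10^6$$$.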

»
9 месяцев назад, # |
  Проголосовать: нравится 0 Проголосовать: не нравится

D could also possibly be solved in a more complex way: observing that $$$2000^{1/16} \le 2$$$, keep partitioning the arrays into square-root-sized blocks.

»
9 месяцев назад, # |
  Проголосовать: нравится 0 Проголосовать: не нравится

Why are the $$$m\cdot(S_m+1)$$$ values of the $$$dp[i][B]$$$ array taken into account in the time complexity of problem E1?

  • »
    »
    9 месяцев назад, # ^ |
      Проголосовать: нравится 0 Проголосовать: не нравится

    Because allocating the $$$dp$$$ array (or if you are using a global array, setting some values to zero) takes time, though in this case, it doesn't affect the complexity.

»
9 месяцев назад, # |
  Проголосовать: нравится 0 Проголосовать: не нравится

why not g[i].push_back(x - 1) when adding edges?

»
9 месяцев назад, # |
  Проголосовать: нравится 0 Проголосовать: не нравится

Can somebody tell me why this is not a correct solution for A?

    int cnt = 0;
    for (int i = n - 1; i > 0; i--) {
        if (arr[i] < arr[i - 1]) {
            cnt += Math.abs(arr[i - 1] - arr[i]);
            arr[i - 1] = arr[i];
        }
    }

»
8 месяцев назад, # |
Rev. 3   Проголосовать: нравится +8 Проголосовать: не нравится

Intuitive DP for C:

Let $$$dp_{i, j, k}$$$ represent if it is possible to achieve value $$$j$$$ at index $$$i$$$ using $$$k$$$ increases (this is separate from $$$k$$$ as defined in the problem). Based on the problem statement, for $$$a_i$$$ to increase to $$$j$$$, $$$a_{i+1} \geq j-1$$$. And to increase $$$a_i$$$ to $$$j$$$, you must use $$$j-a_i$$$ increases. This gives the recurrence:

$$$ dp_{i, j, k} = dp_{i+1, j-1, k-(j-a_i)} $$$

For each individual $$$i$$$, $$$j$$$ can be binary searched. It's easy to see that the recursion depth is at most $$$n$$$, so the total complexity is quadratic, plus an insignificant log factor from the binary search.

»
7 месяцев назад, # |
Rev. 5   Проголосовать: нравится +8 Проголосовать: не нравится

An O(n) solution to C: https://codeforces.com/contest/1856/submission/229939313

For any $$$i$$$, assume $$$i$$$ is the highest. Then $$$a[i+j]=a[i]-j$$$ for $$$i\le i+j\le r_i$$$. If it's possible to convert both $$$(a,b)$$$ and $$$(c,d)$$$, and $$$a\le c\le d\le b$$$, then it's always better to choose $$$(a,b)$$$. So you only care about values of $$$r_i>\max(r_{i-1})$$$, so you can use two pointers.
