SlavicG's blog

By SlavicG, history, 23 months ago, In English

Thank you for participating!

1676A - Lucky?

Idea: mesanu and SlavicG

Tutorial
Solution

1676B - Equal Candies

Idea: Errichto

Tutorial
Solution

1676C - Most Similar Words

Idea: MikeMirzayanov

Tutorial
Solution

1676D - X-Sum

Idea: mesanu

Tutorial
Solution

1676E - Eating Queries

Idea: mesanu

Tutorial
Solution

1676F - Longest Strike

Idea: MikeMirzayanov

Tutorial
Solution

1676G - White-Black Balanced Subtrees

Idea: MikeMirzayanov

Tutorial
Solution

1676H1 - Maximum Crossings (Easy Version)

Idea: flamestorm

Tutorial
Solution

1676H2 - Maximum Crossings (Hard Version)

Idea: flamestorm

Tutorial
Solution

»
23 months ago, # |

It was an honor to test this contest. Thanks for the great round, and for this great blog helping the community.

»
23 months ago, # |

Thanks for the effort, much appreciated!

»
23 months ago, # |

I had another solution for 1676G - White-Black Balanced Subtrees: I used DFS to recursively compute each vertex's counts, saved them in an array, and then scanned the array for the vertices whose subtrees contain equal numbers of white and black vertices. Maybe this is an easier explanation, since it doesn't explicitly involve DP.
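A minimal sketch of that DFS counting approach (hypothetical names: color[v] is 'W' or 'B', adj holds each vertex's children in the rooted tree; this mirrors the comment's idea, not the editorial's exact code):

```cpp
#include <bits/stdc++.h>
using namespace std;

int balanced = 0; // number of balanced subtrees found so far

// Return (white, black) counts of v's subtree; a subtree is balanced
// when the two counts are equal.
pair<int, int> dfs(int v, const vector<vector<int>>& adj, const string& color) {
    int w = (color[v] == 'W'), b = (color[v] == 'B');
    for (int u : adj[v]) {
        auto [cw, cb] = dfs(u, adj, color);
        w += cw;
        b += cb;
    }
    if (w == b) balanced++;
    return {w, b};
}
```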

  • »
    »
    23 months ago, # ^ |

    This approach is DP as well: you are computing and storing the white and black counts for each subtree, and then iterating over all vertices to check which have equal numbers of white and black vertices in their subtree. It is actually quite similar to what the editorial does, except that the editorial counts the balanced subtrees during the DFS itself, while you run a separate loop after the DFS.

»
23 months ago, # |

I really threw on C and D because I'm bad at determining time complexity.

  • »
    »
    23 months ago, # ^ |

    The same with D

    • »
      »
      »
      23 months ago, # ^ |

      For C, I calculated the number of operations as (100 * 50 * 8)^2 instead of (50 * 8)^2 * 100. For D, I assumed that the intended solution was actually O(n^2)

»
23 months ago, # |

My first unrated round, and I solved all of the problems within the contest time. The problems were really interesting. Happy coding :)

  • »
    »
    23 months ago, # ^ |

    Wow, I saw your profile: if your rating were just 12 lower, your rating would go like a slingshot, pulled just a little further back and launched so much further.

    • »
      »
      »
      23 months ago, # ^ |

      Yes, you are right, but I enjoyed this round. :)

»
23 months ago, # |

The stated time complexity of problem D is O(q log n + n). But if we calculate the worst case from the given constraints, it should be about 10^7 * 10^3 + 10^7. Also, you are not considering the loop over test cases, so in my opinion the time complexity should be O(T * n * q log n), and that is not fast either. So I have this doubt; please correct me if I am wrong.

  • »
    »
    23 months ago, # ^ |

    I guess you are talking about problem E, not D.

    It is guaranteed $$$\sum n$$$ and $$$\sum q$$$ over all test cases don't exceed $$$1.5\times10^5$$$, so there's no need to multiply by $$$T$$$ when calculating the time complexity.

    Under this condition, the time complexity is $$$O(q\log n)\approx 1.5\times 10^5\times 17 = 2.6\times 10^6$$$, which passes easily under the 3.5-second time limit.

»
23 months ago, # |

Great round! I am really excited to see my new cyan color.

»
23 months ago, # |

E, F and H2 are really good problems.

Nice Div. 4!

»
23 months ago, # |

Problem H2: has anyone used an ordered_set of pairs (a multiset via pbds)?

»
23 months ago, # |

Can someone please help me understand the ordered_set solution to problem H2? I understand that order_of_key(k) returns the number of elements strictly less than k. However, I have seen many solutions that use an ordered_set of pairs to count the pairs i < j with a[i] >= a[j], like this one:

    ordered_set<pair<ll, ll>> st;
    for (i = n; i > 0; i--) {
        ans += st.order_of_key({a[i], inf});
        st.insert({a[i], i});
    }
    cout << ans << "\n";

Why does order_of_key(a[i]) alone not work (to me, the index seems irrelevant), and what is the purpose of pairs in an ordered_set? Any help would be appreciated.

  • »
    »
    23 months ago, # ^ |

    From what I understand, people use pairs so that they don't have to worry about equal values. Since we need to count inversions that may involve equal values, it is a good idea to use pair<int, int>, where every value gets its own unique index, so we can get the answer simply with an order_of_key call. You can also do this without pairs: use greater_equal<int> in your ordered_set typedef and a map to handle equal values.
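For reference, a self-contained sketch of the pair trick (assuming GCC's pb_ds extension is available; count_pairs_pbds is a hypothetical helper counting pairs i < j with a[i] >= a[j]):

```cpp
#include <bits/stdc++.h>
#include <ext/pb_ds/assoc_container.hpp>
#include <ext/pb_ds/tree_policy.hpp>
using namespace std;
using namespace __gnu_pbds;

// Pairs (value, index): the index makes every element unique,
// so duplicate values can coexist in the tree.
typedef tree<pair<long long, long long>, null_type,
             less<pair<long long, long long>>, rb_tree_tag,
             tree_order_statistics_node_update> ordered_set;

// Count pairs i < j with a[i] >= a[j] by scanning from the right.
long long count_pairs_pbds(const vector<long long>& a) {
    ordered_set st;
    long long ans = 0;
    const long long INF = numeric_limits<long long>::max();
    for (long long i = (long long)a.size() - 1; i >= 0; i--) {
        // elements already in st have index > i; order_of_key({a[i], INF})
        // counts those with value <= a[i], ties included
        ans += st.order_of_key({a[i], INF});
        st.insert({a[i], i});
    }
    return ans;
}
```

Because each inserted pair carries a distinct index, duplicates never compare equal in the tree, so order_of_key({a[i], INF}) counts every already-inserted element with value <= a[i], including ties.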

»
23 months ago, # |

I loved the round! The problems all had short and clean implementations, and I had a good time taking the contest.

»
23 months ago, # |

Thanks a lot for the round; it was great. I hope to see many more Div. 4 rounds like this.

»
23 months ago, # |

Can someone please explain how the BIT approach works for H2? Or maybe redirect me to some article which explains it well. Thanks.

  • »
    »
    23 months ago, # ^ |

    My BIT approach for H2: 156714981

  • »
    »
    23 months ago, # ^ |

    You have to count the pairs with a[i] >= a[j] for i < j. If you know BIT, you can process range-sum queries: at each step, rangesum(a[j], max(a)) gives the count of all previously updated elements with value at least a[j]; after adding that to the answer, increase the count of a[j] in the BIT.
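A sketch of that approach with a Fenwick tree (hypothetical helper; values are assumed to be positive and bounded by m, e.g. after coordinate compression):

```cpp
#include <bits/stdc++.h>
using namespace std;

// Minimal Fenwick tree (BIT) supporting point add and range sum on [1, n].
struct BIT {
    int n;
    vector<long long> t;
    BIT(int n) : n(n), t(n + 1, 0) {}
    void update(int i, long long v) {   // add v at position i
        for (; i <= n; i += i & -i) t[i] += v;
    }
    long long query(int i) {            // prefix sum over [1, i]
        long long s = 0;
        for (; i > 0; i -= i & -i) s += t[i];
        return s;
    }
    long long range(int l, int r) {     // sum over [l, r]
        return query(r) - query(l - 1);
    }
};

// Count pairs i < j with a[i] >= a[j], scanning left to right.
long long count_pairs_bit(const vector<int>& a, int m) {
    BIT bit(m);
    long long ans = 0;
    for (int x : a) {
        ans += bit.range(x, m); // earlier elements with value >= x
        bit.update(x, 1);       // record x for later positions
    }
    return ans;
}
```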

»
23 months ago, # |

The editorial's solution for F, written in Python, gets TLE.

»
23 months ago, # |

Hi, can someone explain why my O(n log n) solution failed for problem F? It got accepted during the contest, and even after the hacking phase finished it was still accepted. But when I checked 14 hours after the contest, it suddenly got TLE on test case 18.

  • »
    »
    23 months ago, # ^ |

    It's because after the hacking phase is over, all solutions are retested with the test cases used in hacks, including yours. The real issue is that the worst-case time complexity is not O(n log n) but O(n^2): C++'s unordered_map resolves collisions by searching a linear chain of elements, so lookups can degrade to O(n). It would be O(log n) per operation if a balanced search tree were used for large buckets instead, which is what Java does.

    • »
      »
      »
      23 months ago, # ^ |

      Thank you for the explanation. So is it not recommended to use unordered_map? I usually avoid std::map for the sake of time complexity, but if unordered_map is not reliable, should I just stick with std::map or some other alternative?

      • »
        »
        »
        »
        23 months ago, # ^ |

        Try to avoid unordered_map as much as you can; use std::map whenever the time complexity allows.
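If unordered_map speed is still wanted, one common workaround (popularized on Codeforces) is a randomized hash, so prepared anti-hash tests no longer collide; a sketch using the splitmix64 mixer:

```cpp
#include <bits/stdc++.h>
using namespace std;

// Randomized hash for unordered_map: adversarial inputs can no longer
// force all keys into one bucket, keeping operations close to O(1).
struct custom_hash {
    static uint64_t splitmix64(uint64_t x) {
        x += 0x9e3779b97f4a7c15;
        x = (x ^ (x >> 30)) * 0xbf58476d1ce4e5b9;
        x = (x ^ (x >> 27)) * 0x94d049bb133111eb;
        return x ^ (x >> 31);
    }
    size_t operator()(uint64_t x) const {
        // per-run random offset so the hash differs between executions
        static const uint64_t FIXED_RANDOM =
            chrono::steady_clock::now().time_since_epoch().count();
        return splitmix64(x + FIXED_RANDOM);
    }
};
```

Usage looks like `unordered_map<long long, int, custom_hash> cnt;` with the same interface as a normal unordered_map.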

»
23 months ago, # |

Can someone please tell me why my solution is giving TLE? It is very similar to what is given in the tutorial. Here is my submission: https://codeforces.com/contest/1676/submission/156791528

  • »
    »
    23 months ago, # ^ |

    You are getting TLE because, rather than iterating over the n values in the array, your loop iterates over every value from mini to maxi, which can be as large as 10^9, since the constraints give 1 <= a[i] <= 10^9.

»
23 months ago, # |

If you use python and get TLE in problem F, I suggest you read this blog.

https://codeforces.com/blog/entry/101817

»
23 months ago, # |

Can someone explain F? I got confused during the contest.

  • »
    »
    23 months ago, # ^ |

    You have to output a range [L, R] such that every value between L and R has count at least k in the array; a value that does not appear in the array has count zero.
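One possible implementation of that statement (a hypothetical helper, assuming the count-then-scan approach: tally each value, then walk the distinct values in increasing order looking for the longest run of consecutive values whose counts are all at least k):

```cpp
#include <bits/stdc++.h>
using namespace std;

// Sketch for problem F: find a longest range [L, R] such that every value
// in it appears at least k times; returns {-1, -1} if no value qualifies.
pair<long long, long long> longest_strike(const vector<long long>& a, long long k) {
    map<long long, long long> cnt; // value -> occurrences, visited in order
    for (long long x : a) cnt[x]++;
    long long bestL = -1, bestR = -1, curL = 0, prev = 0;
    bool open = false; // currently inside a run of "good" values?
    for (auto [v, c] : cnt) {
        if (c < k) { open = false; continue; }
        if (!open || v != prev + 1) curL = v; // start a new run
        open = true;
        prev = v;
        if (bestL == -1 || v - curL > bestR - bestL)
            bestL = curL, bestR = v;
    }
    return {bestL, bestR};
}
```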

»
23 months ago, # |

orz

»
23 months ago, # |

Can somebody help me with my solution for problem F? It got accepted last night during the contest, but it was rejected in system testing with TLE on test case 18. My Solution for F

I am basically doing exactly what is mentioned in the editorial, but mine gets TLE.

»
23 months ago, # |

I ran into a shocking result on problem F: if I use unordered_map it gives me TLE, but with map it works fine! Am I missing something?

  • »
    »
    23 months ago, # ^ |

    When n is large (usually more than 10^5), there can be a high number of collisions, so a hash map hits its worst-case insertion time of O(n) instead of O(1), which makes the overall time complexity O(n^2).

    The solution is to use std::map: it has no collision problem, and each operation costs O(log n).

»
23 months ago, # |

Can you please tell me why my submission is giving TLE?

Submission

»
23 months ago, # |

Hi mesanu, for D. X-Sum my Python code got TLE. I looked at the editorial, and it looks similar to mine. Is it a language problem (do I have to use C++ rather than Python), or am I missing something in my code?

My submission Id: 156738601

Thanks!

  • »
    »
    23 months ago, # ^ |

    You don't have to use C++. Try submitting in PyPy 3; it is much faster.

»
23 months ago, # |

The time complexity of F is not O(n) as mentioned in the solution; it is O(n log n), since we need to sort the array.

»
23 months ago, # |

For problem G, why does my Java solution get TLE? The code is here: https://codeforces.com/contest/1676/submission/156893634

»
22 months ago, # |

In problem D, why did you do num -= ar[i][j] * 3?

  • »
    »
    22 months ago, # ^ |

    Because you count it 4 times, once in each of the 4 diagonal directions (top-right, top-left, bottom-left, bottom-right).
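An equivalent way to do that bookkeeping is to precompute full diagonal sums, in which case each cell is counted twice and subtracted once (a hypothetical helper, not the commenter's exact code):

```cpp
#include <bits/stdc++.h>
using namespace std;

// X-sum via precomputed diagonal sums: cell (i, j) lies on main diagonal
// i - j + (m - 1) and anti-diagonal i + j; the cell itself appears in both
// sums, so it is subtracted once.
long long best_x_sum(const vector<vector<long long>>& a) {
    int n = a.size(), m = a[0].size();
    vector<long long> d1(n + m, 0), d2(n + m, 0); // main / anti diagonals
    for (int i = 0; i < n; i++)
        for (int j = 0; j < m; j++) {
            d1[i - j + m - 1] += a[i][j];
            d2[i + j] += a[i][j];
        }
    long long best = LLONG_MIN;
    for (int i = 0; i < n; i++)
        for (int j = 0; j < m; j++)
            best = max(best, d1[i - j + m - 1] + d2[i + j] - a[i][j]);
    return best;
}
```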

»
22 months ago, # |

A segment tree in a Div. 4 round???

»
22 months ago, # |

Still getting TLE on problem E even after applying a lower-bound binary search. Solution: https://codeforces.com/contest/1676/submission/159866893

  • »
    »
    21 month(s) ago, # ^ |

    You are sorting the array and recalculating the sums inside every query; that is why you are getting TLE. You can do both once before the first query, since those values stay the same across queries. See my submission.
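A sketch of the fixed structure (hypothetical helper: sort and prefix sums are computed once, then each query x asks for the minimum number of the largest candies whose sum reaches x, or -1):

```cpp
#include <bits/stdc++.h>
using namespace std;

// Sort once in descending order, build prefix sums once, then answer
// every query with a single binary search.
vector<int> answer_queries(vector<long long> a, const vector<long long>& qs) {
    sort(a.rbegin(), a.rend());              // greedy: eat sweetest first
    vector<long long> pref(a.size() + 1, 0); // pref[i] = sum of i largest
    for (size_t i = 0; i < a.size(); i++) pref[i + 1] = pref[i] + a[i];
    vector<int> res;
    for (long long x : qs) {
        // first prefix whose sum is at least x
        auto it = lower_bound(pref.begin() + 1, pref.end(), x);
        res.push_back(it == pref.end() ? -1 : (int)(it - pref.begin()));
    }
    return res;
}
```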

»
20 months ago, # |

For F, finding the longest subarray of good values takes O(n) time. However, we must sort the array in advance, which is an O(n log n) operation, so I think the overall time complexity should be O(n log n) instead of O(n). Correct me if I am wrong.

»
2 months ago, # |

One thing to point out: there was no need to sort the array again in H2; it was already sorted as a side effect of the merge sort.
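For reference, a merge-sort count sketch (pairs i < j with a[i] >= a[j]; sorting happens as a side effect, so no separate sort pass is needed afterwards):

```cpp
#include <bits/stdc++.h>
using namespace std;

// Count pairs i < j with a[i] >= a[j] on the half-open range [l, r),
// sorting a[l..r) ascending as a side effect.
long long merge_count(vector<long long>& a, int l, int r) {
    if (r - l <= 1) return 0;
    int m = (l + r) / 2;
    long long cnt = merge_count(a, l, m) + merge_count(a, m, r);
    vector<long long> tmp;
    int i = l, j = m;
    while (i < m && j < r) {
        if (a[i] < a[j]) {
            tmp.push_back(a[i++]); // strictly smaller: contributes no pair
        } else {
            // a[j] pairs with every remaining left element (all >= a[j])
            cnt += m - i;
            tmp.push_back(a[j++]);
        }
    }
    while (i < m) tmp.push_back(a[i++]);
    while (j < r) tmp.push_back(a[j++]);
    copy(tmp.begin(), tmp.end(), a.begin() + l);
    return cnt;
}
```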