### chromate00's blog

By chromate00, 11 days ago,

Well, it's my first time posting this type of blog on Codeforces. Most of the time I could either find help elsewhere or find the answer myself, but this time it was different. I could not find any resource related to this problem, given that the discussion page for this Opencup contest is a blank Russian blog from 2015 that I do not want to necropost on, and I do not personally know anyone who has solved this problem. The problem is here on Baekjoon OJ, and I would like to share my approach so far before asking the question, so that we can improve on the current approach.

Current Approach
Code based on approach

Now here's the issue: I am still getting a WA verdict on the problem. This might mean that I need to prove that a tighter bound is possible; however, I am unable to improve on the current approach any further. Can anyone help me with this problem? Is there anyone here who has solved it, personally and/or during the contest?

• +2

By chromate00, 2 weeks ago,

Before we get to the point, I kindly ask you to read the previous part of the blog if you haven't already. It contains a lot of the context we will be speaking of.

So, in the previous half of the blog, I explained the basics of named requirements and promised to demonstrate implementing a working class based on the RandomNumberEngine requirement. It took some time (due to life & stuff), but here it is now. Here I explain the process of implementing Lehmer64, following the RandomNumberEngine requirement.

First, I carefully read the reference for the RandomNumberEngine requirement. Reading these requirements carefully before implementing can prevent many mistakes, so you may as well read them before reading the rest of the blog.

The concise description at the top of Requirements provides a very important piece of information that is not present in the table below. It is as follows.

A type E satisfying UniformRandomBitGenerator will additionally satisfy RandomNumberEngine if...

This means that, for a type to meet the requirements for RandomNumberEngine, it must meet UniformRandomBitGenerator first. Therefore, I implemented the requirements for UniformRandomBitGenerator first. This adds 3 lines of code. (One requirement coincides with RandomNumberEngine.)

using result_type=uint64_t;
constexpr static uint64_t min(){return 0ull;}
constexpr static uint64_t max(){return -1ull;}


Now that we have the three members, we can implement the functions needed for RandomNumberEngine. First, I started off with the two constructors and the seed functions, seed() and seed(s). The former initializes the RNG with a default seed, the latter with an arbitrary (user-given) seed. I defined the default seed as the maximum value of an unsigned 64-bit integer. However, one issue was that Lehmer64 uses a 128-bit state. Therefore, I had to expand the seed to a 128-bit integer with splitmix64 and some bitmasks. Here are the members I added.

static uint64_t sm64(uint64_t x,uint64_t n)
{
    x+=n*0x9e3779b97f4a7c15ull;
    x=(x^x>>30)*0xbf58476d1ce4e5b9ull;
    x=(x^x>>27)*0x94d049bb133111ebull;
    return x^x>>31;
}
static const uint64_t def=-1ull;
Lehmer64():state(state_t(sm64(def,1))<<64|sm64(def,2)){}
Lehmer64(uint64_t seed):state(state_t(sm64(seed,1))<<64|sm64(seed,2)){}
Lehmer64(const Lehmer64& a):state(a.state){}
void seed(){state=state_t(sm64(def,1))<<64|sm64(def,2);}
void seed(uint64_t seed){state=state_t(sm64(seed,1))<<64|sm64(seed,2);}


After this, we need to implement the seed(q) function and its corresponding constructor. The q in seed(q) is defined above as "q, an lvalue of some type satisfying SeedSequence". SeedSequence here is another named requirement. The only member function of SeedSequence we need to know here, though, is generate(rb,re). In the reference for SeedSequence, there is a description of this member function.

Fills [rb,re) with 32-bit quantities depending on the initial supplied values and potential previous calls to generate. If rb == re, it does nothing.

So, this is a simple function filling a range with pseudo-random 32-bit unsigned integers. Knowing this, I made a union type of four 32-bit unsigned integers and one 128-bit unsigned integer. This is a lazy way to convert the generated 32-bit integers to one 128-bit integer (in terms of raw bits). (Strictly speaking, reading a union member other than the last one written is undefined behaviour in C++, but it works on the major compilers.) After that, I used that union type and wrote the function.

union lz{uint32_t st[4];state_t stt;};
template<class Sseq>
Lehmer64(Sseq& q){lz k;q.generate(k.st,k.st+4);state=k.stt;}
template<class Sseq>
void seed(Sseq& q){lz k;q.generate(k.st,k.st+4);state=k.stt;}


Now we have finished 7 of the 13 functions already. For the rest, we can follow the detailed implementations of Lehmer64, or the descriptions of the functions. Finally, here are the other functions I added.

uint64_t operator()(){state*=mult;return state>>64;}
template<class T>
T pow(T a,uint64_t b){T z=1;do{if(b&1)z*=a;a*=a;}while(b/=2);return z;}
bool operator==(const Lehmer64& o)const{return state==o.state;}
bool operator!=(const Lehmer64& o)const{return state!=o.state;}

template<class os>
os& operator<<(os& ost,const Lehmer64& L){ost<<uint64_t(L.state>>64)<<" "<<uint64_t(L.state);return ost;}
template<class is>
is& operator>>(is& ist,Lehmer64& L){uint64_t a,b;ist>>a>>b;L.state=a;L.state<<=64;L.state|=b;return ist;}


(pow here exists just for binary exponentiation, needed for the discard function, as I did not want the discard function to be $O(n)$. Also note that the operator overloads for >> and << exist outside the class.)

Here is the final code after merging everything to a working class.

Code

Of course, if you are not experienced in structured coding, it may take some time to write code based on complex requirements. Still, understanding named requirements is very important; you will need them at some point as you advance further in CP. I hope this helps in the process of understanding these named requirements. Please post your questions below if anything in the explanation (or in any named requirement!) needs clearing up.
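As a quick sanity check of why meeting the requirement is worth the effort: any conforming engine drops straight into the STL's distributions and algorithms. The sketch below uses std::mt19937 as a stand-in for the Lehmer64 class from this blog (so the snippet stays self-contained); roll_die is a hypothetical helper name, and a conforming Lehmer64 should slot in the same way.

```cpp
#include <cassert>
#include <random>

// Any engine satisfying RandomNumberEngine works here unmodified;
// std::mt19937 merely stands in for the Lehmer64 class built above.
template <class Engine>
int roll_die(Engine& eng) {
    std::uniform_int_distribution<int> dist(1, 6);  // inclusive range [1, 6]
    return dist(eng);
}
```

The same template would accept Lehmer64 without any changes, which is exactly the point of following the named requirement.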

• +7

By chromate00, 4 weeks ago,

Today I encountered a very strange behaviour. A comment of mine had its vote status changed, but I realized that the original post where it was had already been removed (or hidden, left as a draft) before the change. In this state, no one should be able to see the original post, or only the OP should be able to, if it were just changed back to a draft. Therefore, the vote status cannot possibly change multiple times, as the only one who can possibly access the post is the OP. So I came to think: is the system account manipulating the vote status? Of course, there are unknown parameters that affect the contribution score, and I don't really mind that. I don't really mind my contribution dropping either. (Actually, I would tolerate it dropping to somewhere near Sparky_Master_WCH1226's score even.) Still, I think the contribution system, or at least the vote system, should be as transparent as possible, reflecting actual votes made by actual human beings.

Hence, the question. Is the vote system rigged?

• -50

By chromate00, 2 months ago,

Just today, I came up with this trick while solving a problem that (originally) required a merge-sort tree. While I knew that it could be solved by a merge-sort tree, I thought implementing it would be relatively tedious. This being said, I do not mean to say that I do not like the data structure, I think its concepts are very clever. However, I do like algorithms whose implementation is very elegant. For this reason, I prefer the fenwick tree over the segment tree on many problems. This was another case of me considering how a fenwick tree could replace the usual implementation. And just today the idea came to me, which turned into a method which could almost replace merge-sort trees. (Note that some people may have found this earlier than I did, and I think most people who did would have found this independently like I did too)

So, as a disclaimer before explaining the method, I should tell you that this method is not really better (time complexity-wise) than the merge-sort tree. However, in practice, many queries on the merge-sort tree require $O(\log^2 N)$ time per query anyway. So unless $N \gg Q$ (meaning that $N$ is much larger than $Q$, a situation which does not happen very often in the CP scene), this method seems feasible enough.

This methodology consists of three steps, "add", "build", and "query". The "add" and "query" step's implementation is a modification to the fenwick tree, and the only added step is "build", which is not complex at all. (You could even one-line it!) I will explain them in order.

### First Step: Add

This step consists of adding the element to the indices that need to store it. For example, if we need to add the element $2$ at index $1$, we add it to the indices $[1,2,4,8,\cdots]$, up to the largest power of two not exceeding $N$. This can be done by replacing the operation in the "add" step of the fenwick tree with a push_back operation (assuming that bit is an array of vectors). Code is as follows.

void add(int i,ll x)
{
    while(i<MAXN)
    {
        bit[i].push_back(x);
        i+=i&-i;
    }
}


The time complexity of this step is $O(\log N)$ per addition, as there are at most $\log N$ indices to which we need to add the element, and therefore $O(N \log N)$ in total.

### Second Step: Build

Now, after the "add" step, each index of the fenwick tree contains the elements of the indices it needs to manage. However, the vectors in the fenwick tree are not sorted yet, and in this state we cannot get accurate answers to the queries. Therefore, we need to sort every vector in the fenwick tree. The code for this step is pretty straightforward. Code is as follows.

void build()
{
    for(int i=0;i<MAXN;i++)sort(begin(bit[i]),end(bit[i]));
}


Now for the time complexity. The time complexity of this step is not so trivial, but we can prove an upper bound for it. First we need to prove this.

$a+b=N \Rightarrow O(a \log a) + O(b \log b) = O(N \log N)$

This can be proven through the following.

$a+b=N \Rightarrow \log a, \log b < \log N$
$O(a \log a) + O(b \log b) = O((a+b) \max (\log a, \log b)) = O(N \log N)$

Now given that we have at most $N \log N$ values in the entire fenwick tree, the time complexity will be at most:

$O(N \log N \log (N \log N)) = O(N \log N (\log N + \log \log N)) = O(N \log^2 N)$

Therefore this step's time complexity has an upper bound of $O(N \log^2 N)$. This bound is not very tight, and while I am convinced that a tighter upper bound can be proven, I was unable to do it myself. (Maybe I could prove $O(N \log N \log \log N)$?) Please help me find a tighter upper bound if you can.

UPD: If you sort the queries before you actually insert the elements, you can omit this step and get a more optimal time complexity, $O(N \log N)$, on building the fenwick tree. Credits go to darkkcyan for finding out this technique, you can read this blog post to learn more.
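Here is a minimal sketch of how that could look (my own reading of the idea, not darkkcyan's exact code; build_sorted is a hypothetical helper name): if the (value, position) pairs are inserted in increasing value order, every bit[i] vector receives its values already sorted, so the per-vector sorts can be dropped.

```cpp
#include <algorithm>
#include <utility>
#include <vector>

const int MAXN = 16;                  // small bound, just for illustration
std::vector<long long> bit[MAXN];

// Same "add" as in the blog: append x at every index managing position i.
void add(int i, long long x) {
    for (; i < MAXN; i += i & -i) bit[i].push_back(x);
}

// Offline build: one global O(N log N) sort instead of sorting every vector.
void build_sorted(std::vector<std::pair<long long, int>> vals) {
    std::sort(vals.begin(), vals.end());      // increasing value order
    for (auto [x, pos] : vals) add(pos, x);   // each bit[i] stays sorted
}
```

This only works offline, of course: all values have to be known before any query is answered.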

### Third Step: Query

Now that all indices of the fenwick tree are sorted, we can answer the queries. This step is based on the usual implementation of fenwick trees, and therefore finds the answer for a prefix $[0,i]$. Therefore, we can find the answer for many types of merge-sort tree queries with $[l,r]=[0,r] - [0,l-1]$. Note that there may be types of queries that cannot be decomposed like this, so this method does not completely replace the merge-sort tree.

Now for the example. Say, the query we want to answer is as follows.

$l \, r \, x$: Find the number of elements greater than $x$ in the interval $[l,r]$.

We can answer this query on a prefix $[0,r]$ by adding up the answers of the intervals making up the prefix. The time complexity for answering about this prefix is $O(\log^2 N)$ per query, as there are $O(\log N)$ intervals, and we need to binary search on each partial interval. We can then answer any interval by subtracting the answer for $[0,l-1]$ from that of $[0,r]$. Code is as follows.

ll query(int i,ll x)
{
    ll ans=0;
    while(i)
    {
        ans+=end(bit[i])-upper_bound(begin(bit[i]),end(bit[i]),x);
        i-=i&-i;
    }
    return ans;
}


With this discovery, I have found that fenwick trees are much more powerful than I used to think. I think this usage of fenwick trees can replace merge-sort trees in many of their applications, and so I decided to give it a name: the Sort-Fenwick. The name comes almost directly from the merge-sort tree, but due to the fact that there is no "merge" going on at all, I omitted "merge" from the name. I really like this methodology, as it is much, much simpler than implementing a merge-sort tree. Suggestions and questions are always appreciated, and thanks for reading!
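Putting the three steps together, a minimal self-contained sketch of the Sort-Fenwick for the example query above ("count elements greater than $x$ in $[l,r]$") could look like this; MAXN is kept tiny just for illustration.

```cpp
#include <algorithm>
#include <vector>
typedef long long ll;

const int MAXN = 32;                 // 1-indexed positions 1..MAXN-1
std::vector<ll> bit[MAXN];

// "add": append x at every fenwick index managing position i
void add(int i, ll x) { for (; i < MAXN; i += i & -i) bit[i].push_back(x); }
// "build": sort every vector once, after all additions
void build() { for (int i = 0; i < MAXN; i++) std::sort(bit[i].begin(), bit[i].end()); }

// count of elements greater than x among positions [1, i]
ll query(int i, ll x) {
    ll ans = 0;
    for (; i; i -= i & -i)
        ans += bit[i].end() - std::upper_bound(bit[i].begin(), bit[i].end(), x);
    return ans;
}
// count of elements greater than x among positions [l, r]
ll query(int l, int r, ll x) { return query(r, x) - query(l - 1, x); }
```

Usage is simply add for every element, one build, then queries in any order.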

• +69

By chromate00, 2 months ago,

In today's contest, at 1737B - Ela's Fitness and the Luxury Number, a lot of contestants were hacked or FSTed. This is due to the inaccurate nature of floating-point types. While I do think this happening on a B is not a good thing, it happened anyway. So, as we experienced a failure this time (myself a long time ago, multiple times), we need to prepare for the next time it happens.

My solution to this was using Python's isqrt function. It receives a non-negative integer $x$ and returns $\lfloor \sqrt{x} \rfloor$. It is guaranteed to return the accurate value, so it is the perfect tool for the job. I read this blog as well, and his points are valid. I still thought telling people about the isqrt function would be a great addition to the community. Shoutout to -is-this-fft- for writing that blog.

Including this time, there will be many situations where there is a better tool for the job. It is a good idea to actively look for them and use them to your advantage. This is the exact reason I am writing this blog, and other blogs such as the STL in CP series as well. I hope you try to do the same when learning and practicing CP as well.

p.s. I think there should be other languages with builtins serving the same functionality, or ways to do it in the language you use. Please suggest it in the comments section below! It would be a great addition to the topic.
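For C++ specifically, I am not aware of an exact integer square root builtin in the standard library, but a common workaround is to take the floating-point estimate and then correct it with exact integer arithmetic. A sketch of that idea (isqrt_ll is my own name for it):

```cpp
#include <cmath>

// Floor of sqrt(x) for a non-negative 64-bit x: use sqrtl only as an
// initial guess, then fix it up with exact integer comparisons.
// (Assumes x is far enough below LLONG_MAX that (r+1)*(r+1) cannot overflow.)
long long isqrt_ll(long long x) {
    long long r = (long long)sqrtl((long double)x);
    while (r > 0 && r * r > x) --r;        // estimate too high: step down
    while ((r + 1) * (r + 1) <= x) ++r;    // estimate too low: step up
    return r;
}
```

The correction loops run at most a couple of iterations, since the long double estimate is already within a few units of the answer.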

UPD: I just found that this wikipedia page exists, please take a look if you're interested in other methods to do this!

• +36

By chromate00, 2 months ago,

https://codeforces.com/blog/entry/10355 — I hope you read this blog before reading mine. Basically, the SGI STL implements the "rope" data structure, a tree-based data structure for doing string operations in $O(\log N)$ time. You can read ahead if you are already familiar with the data structure.

Before we start, let me explain the context of how I thought of this "trick". The rope implementation in the SGI STL (from which GNU G++ borrows many extensions) provides many operations on strings. The most important of them are arguably substr (splitting a substring off into another rope) and + (merging two ropes into one, effectively "concatenating" the strings). There are more functions too, but there is no function to reverse a substring.

Now back to my side of the context. I was thinking about a way to solve this problem.

Given a string and $Q$ queries of the following kind, output the resulting string after all $Q$ queries have finished.

l r: reverse the substring in the interval $[l,r]$.

(This problem appeared on the Croatian Programming Contest 2010, you can try to solve it here.)

Now some of you might know a clear solution already — to use a splay tree. While other BBSTs can work with this type of queries, the splay tree would be one of the most well known ones. However, I do not know how to implement splay trees, but I do know that the rope exists. After a few hours of thinking, I came up with this solution.

Let us manage a rope $R$ with the given string $S$ and the reversed string $S'$ concatenated. If we denote the original length of the string as $N$, the new length of the string in the rope would be $2N$.

For every closed interval $[l,r]$ representing a substring $s$, given that $1 \leq l \leq r \leq N$, we can also find an interval $[l',r']$ representing the reversed substring $s'$ in the same rope. As you can clearly see, this interval $[l',r']$ corresponds to $[2N+1-r,2N+1-l]$. Now we can split the rope into five intervals based on these two intervals we have found.

These five intervals I am speaking of are the two intervals we have (one from the query, one mirrored from it) and the three other intervals making up the rope. So the whole rope, represented by the five intervals, would be $[1,l)+[l,r]+(r,2N+1-r)+[2N+1-r,2N+1-l]+(2N+1-l,2N]$. Now we can swap the interval from the query with the mirrored interval. The new rope would be represented as $[1,l)+[2N+1-r,2N+1-l]+(r,2N+1-r)+[l,r]+(2N+1-l,2N]$, and would contain the new string and its mirrored version, concatenated. The time complexity of this operation is $O(\log N)$, the same as the rope's other operations.

Now for the output, we can save the result in a string and discard the latter half, as we do not need the reversed string now. The problem is solved.

The implementation of this solution is very simple, since we can use the functions already given by the rope implementation (the ones stated as "the most important" above). In my opinion, it is much simpler and easier to understand than implementing a splay tree. Last but not least, it also supports the other operations possible on a rope (you just have to mirror them on the reversed half as well). Thank you for reading, and as always, suggestions and questions are welcome.
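For reference, here is a sketch of the swap operation using G++'s crope extension (reverse_range is a hypothetical helper name of mine; the rope's substr(start, len) and + do all the real work in $O(\log N)$ each):

```cpp
#include <ext/rope>
#include <string>
using namespace __gnu_cxx;

// R holds S followed by reverse(S), total length 2n. Reversing S[l..r]
// (1-indexed) = swapping the block [l, r] with its mirror image.
void reverse_range(crope& R, int n, int l, int r) {
    int a = l - 1, len = r - l + 1;         // 0-indexed start and length
    int m = 2 * n - r;                      // 0-indexed start of the mirror block
    R = R.substr(0, a)                      // prefix [1, l)
      + R.substr(m, len)                    // mirrored block takes [l, r]'s place
      + R.substr(a + len, m - (a + len))    // untouched middle
      + R.substr(a, len)                    // original block moves to the mirror
      + R.substr(m + len, 2 * n - (m + len)); // suffix (2N+1-l, 2N]
}
```

Note that ext/rope is a GNU extension, so this compiles on G++ (and on Codeforces) but is not portable standard C++.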

• +25

By chromate00, history, 2 months ago,

(This was previously a blog reflecting on my changes, but I decided to use it only as a Q&A due to some suggestions in the comments. You can see what was written in the revision history.)

Hi, this is chromate00. Ask me anything. Literally, anything. Criticism included.

• -71

By chromate00, 2 months ago,

Named requirements are a summary of what the STL expects of certain classes/types passed to functions as arguments. Understanding these is important for writing code that works well with other functions in the STL. There are many places where you can read about them, but I personally prefer Cppreference. Let's take the following statement as an example.

Compare functions should return false when the two values are equal

This is explicitly stated on the named requirement named Compare, the parts that state this are as follows:

Given comp, an object of type T, For all a, comp(a,a)==false

From this we can see that an object comp of type T, when called as comp(a,a), should return false. Obviously the two as are equal, and we can expect that the STL functions may (or in this case, will) spit out unexpected errors if the requirement given in the statement is not satisfied.

The above was an example of named requirements, from a statement relatively well-known in CP. And in this example you can see that following the named requirements is very, very important.
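As a tiny illustration of that Compare statement, here is a pair of comparators (the names Less and LessEq are my own): the first satisfies the requirement, since comp(a,a) is false, while the seemingly-innocent <= version violates it and can make std::sort and friends misbehave badly.

```cpp
// A valid Compare: strict less-than, so comp(a, a) is false for all a.
struct Less   { bool operator()(int a, int b) const { return a <  b; } };

// An INVALID Compare: less-or-equal returns true on equal arguments,
// breaking the "comp(a,a) == false" requirement; passing it to std::sort
// is undefined behaviour and commonly crashes in practice.
struct LessEq { bool operator()(int a, int b) const { return a <= b; } };
```

The difference is invisible on distinct values, which is exactly why this bug is so easy to ship.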

Now we need to understand exactly how we should read the named requirements. There are many different named requirements, and not all named requirements' descriptions look the same. Noticing the difference before understanding them is helpful, so I shall explain what is different in these requirements.

• Some requirements have a very "short and concise" description.

The Predicate requirement is a good example of this. Let's see the description of the requirement, and try to understand it.

The Predicate requirements describe a callable that returns a value testable as a bool.

A good way to understand such descriptions is cutting them phrase by phrase. We can apply this method to this description like this.

"The Predicate requirements describe..."

• a callable means that this requirement includes the Callable requirement
• that returns a value means that the return type of the call is not void
• testable as a bool means that the returned type either is bool, or can be contextually converted to a bool value. The types that fit this condition include int, long, and most importantly bool.

• Some requirements have an "organized table" in its description.

The UniformRandomBitGenerator requirement is a good example. In its description you can clearly see (with the table) what functions/expressions you need to implement for this requirement. The table provides information on what methods it requires, what return types they need to have, and extra requirements needed to fit the requirement.

(Red = The members you need to implement, Blue = Their return types, Green = Implementation details about the members. most other descriptions have a table with a similar format as well.)

• Some requirements have a "dependency" in its description.

The named requirements for iterators show this "dependency" well. Basically when we say

The type T satisfies A if ... the type T satisfies B, ...

Then the named requirement "A" has the named requirement "B" as a prerequisite. Therefore to implement a type satisfying A, it would be convenient to implement methods for B first.

These three are the main ways in which (at least as I see it) named requirements are described. It would be good practice to try these methods on other named requirements, or to come up with your own way of reading them. This was part 1 of "Understanding Named Requirements"; in part 2 I will demonstrate building an actual working class based on the RandomNumberEngine requirement, as practice for understanding the descriptions. Stay tuned!

• +31

By chromate00, history, 2 months ago,

Let's begin this story with what happened two days ago.

Well, this happened. Too bad. Maybe I'll just go back to life, I presumed. Until I noticed the line -

You should not publish comic content, especially if it is not interesting to a wide part of the audience, repeats the existing one, or has no connection with competitive programming.

And oh, that was the line I once did not know of. I thought some comedy was fine, as a big part of this community is still made of comedy. And it wasn't. The thing is, while I sometimes feel like shitposting, I agree that we need some part of the community which solely concentrates on learning, teaching, and experimenting. I consider problem solving quite a major part of my life, and for the very same reason I tend to spend a lot of time on Codeforces. After all, my vision is to become an algorithm researcher, like the people you all know, such as Dijkstra, Floyd, etc.

This was a lesson learned. And next time you see me in comments/blogs, you'll see me trying to be as helpful as possible. I expect the next blog to be a continuation of the STL series (I've actually been working on it in a draft), and after that I will explain some useful things too. See you next time in another (helpful) comment/blog.

• -65

By chromate00, history, 3 months ago,

This (quite sudden) post is about a data structure I came up with one day in bed, and it won't be a full series (I am still a student, which means I don't have time to invent data structures all day and all night). Still, thinking about it, I thought it would be a good idea to cover it in its own post. So here we are.

### Concept

The traditional bit trie uses one leaf node per number, and every leaf node has the same depth. But I thought about this — why? "Do we really have to use $32$ zeroes, just to represent a single zero? Heck no. There has to be a better way." And this is where I came up with this idea.

Instead of using the same old left-right children, let's use up to $l \leq depth$ children for one node, where $l$ is the length of the undetermined suffix when each node represents a prefix. For every prefix node, we connect the prefix nodes with exactly one more $1$ bit determined after the current prefix. For example, under $1\text{xxxx}_2$, there are $11\text{xxx}_2$, $101\text{xx}_2$, and so on. Then, while the undetermined suffix (e.g. the $\text{xxx}$ in $1\text{xxx}_2$) is, of course, undetermined, we can assume it is all $0$ for the exact node the prefix sits on. Then we can use the prefix node $1\text{xxxx}_2$ for $10000_2$ as well.

### The Important Detail

At this moment, you should be wondering: how do we distinguish the node for the number $10000_2$ from the prefix $1\text{xxxx}_2$? They use the same node, after all. My conclusion? You don't need to. Instead, you can just save the size (number of elements) of each subtree. Now, let us denote the size of the subtree of prefix $S$ as $n(S)$; then $n(1\text{xxxx}_2) = n(11\text{xxx}_2) + n(101\text{xx}_2) + \ldots + n(10000_2)$ holds. So you can just traverse the child nodes one by one, and the rest is the number itself.

### Traversing the Bit-indexed Trie

Using the "important detail" stated above, traversing the Bit-indexed Trie boils down to simply interpreting it like a binary trie. We start at the root, which is $0$, and we can interpret this node as $0\text{xxxxx}_2$. This root node may (or may not) have $01\text{xxxx}_2$ as a child node. The important point here is to keep a variable for the size of the "virtual" subtree of the current node (we will denote this as $c$). If the subtree size of the current node ($0\text{xxxxx}_2$) is $s$ and that of the right child node ($01\text{xxxx}_2$) is $s_1$, then the subtree size of the left child node, when interpreted as a binary trie, should be $s-s_1$. So if we want to traverse towards the right child node, do so, and update $c$ to $s_1$. On the other hand, if we want to traverse towards the left child node, stay on the current node, assume that we did move ($0\text{xxxxx}_2$ still shares the node with $00\text{xxxx}_2$), and update $c$ to $c-s_1$. After understanding this, almost everything works the same as in the usual trie.

### Interesting stuff about the Bit-indexed Trie

With the fact that a single number is represented as a single node in the data structure, we can use a hash table to represent the whole trie. And what's so great about this method? It lies in its simplicity. Let's assume the trie is represented with an unordered_map<int,int> type (the key is the node, the value is the subtree size). Now inserting a value into the trie is as simple as this:

Insertion to the Hash-table BI-Trie

and that is it! Simple, isn't it? And here comes the reason I named the data structure "Bit-indexed Trie". Many of you should have noticed the similarity of the name with the Bit-indexed Tree, also known as the Fenwick Tree. (I could not call it the Fenwick Trie; Peter Fenwick simply did not make it.) The Bit-indexed Trie even has a lot in common with the Bit-indexed Tree! Here are some examples:

• It can replace the traditional bit trie, similar to how the BIT replaces the Segment Tree in many situations.
• It reduces the memory usage from $2M$ ~ $4M$ to $M$, as values are saved in every node, not only the leaf nodes.
• They have very similar implementation in many parts, see the snippet above.
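Based on the description above, here is my own guess at a minimal insertion sketch (the names are mine, not necessarily those of the hidden snippet). The key observation is that walking from a value's node up to the root is exactly repeated clearing of the lowest set bit, the same kind of x &= x - 1 step that makes the BIT family so compact:

```cpp
#include <unordered_map>

// node -> size of its subtree (number of inserted values under that prefix)
std::unordered_map<unsigned, int> mp;

// Insert x into the hash-table BI-Trie: bump the count of x's own node,
// then of every ancestor prefix up to the root 0, found by clearing the
// lowest set bit each time.
void insert(unsigned x) {
    mp[x]++;                      // the node representing x itself
    while (x) mp[x &= x - 1]++;   // ancestors: 10110 -> 10100 -> 10000 -> 0
}
```

For example, after inserting 5 ($101_2$) and 6 ($110_2$), the shared prefix node 4 ($1\text{xx}_2$) should hold subtree size 2.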

Also, as we saved the subtree sizes, accessing the subtree size information turns out to be almost $O(1)$ (assuming the nodes are saved in a data structure with $O(1)$ random access). Even if you don't make it $O(1)$, I believe the improvements will be quite significant, as it would be possible to optimize some processes with __builtin_clz and similar bit functions.

EDIT: errorgorn and Keewrem told me that finding the amount of numbers with a certain prefix is not $O(1)$, and they are correct. It turns out to be $O(\text{number of trailing zeroes in the prefix})$.

### Summary

In this post, I covered the concepts and details of a data structure I came up with, which makes it possible to reduce the memory usage of a bit trie to half of the usual trie's, or even further. I look forward to releasing a full C++ implementation of it soon, and I hope many people will notice the beauty of this data structure. Thank you for reading.

• +33

By chromate00, 3 months ago,

Alas, it's time for another day of STL in CP. Last time we covered the non-modifying sequence operations, and as you may have expected, we shall cover the modifying ones this time. The concept is simple: they simply modify a range or a variable (or several of them) which you have given to the functions. But the usage? There is something deeper when we think about the details of these functions. In this article, we will cover the trivial functions quickly, and then take some time looking at the ones with more interesting usages.

### copy, copy_if, copy_n, copy_backward

These are quite trivial; they do what their names suggest. copy copies, copy_n copies $n$ elements from the beginning, and copy_backward does the same thing as copy but copies the last element first. Each takes $O(n)$ assignments, so it's $O(n)$ assuming the type you're copying takes $O(1)$ to assign. Otherwise, it's multiplied by that time complexity, obviously. copy_if is a bit more useful, though, as functions with filtering features are always useful somewhere.

### move, move_backward

These two do the same thing as the copy ones, except they move the elements, meaning that the moved-from objects give up their resources instead of being duplicated. They may use less time than copy, but are only useful when you do not need the original range afterwards. Beware!

### fill, fill_n

If you can read, you should know what these functions do: they simply "fill" a range. Time complexity is $O(cn)$, where $c$ is quite obviously the time for one assignment, which is most likely $1$ or some small constant unless you're filling a multidimensional container, say, a vector<string>.

### transform

This is when things get interesting. Basically, this function applies a function to all elements in a range (or two) and copies the results to another range (or writes it in-place if you want it to). Why is this interesting? Because it can work in all situations where you want to apply a function to a range or two, basically any function in the format of $y = f(x)$ or $z = f(x,y)$. And for this reason, this function can be used in implementing Sparse Tables. The code is as follows:

Snippet
Code getting AC on 'Static RMQ' @ Library Checker

### generate, generate_n

This function is used when you want to fill a range with values generated by a function, functor, lambda, or basically any callable. This includes an RNG, meaning this function can be used to fill an array with random values. More interesting things can be done too, such as filling the range with a certain integer sequence, for example the Fibonacci sequence. Just use something such as [a=0,b=1]()mutable{int c=(a+b)%1000000007;a=b;b=c;return a;} as the function, and it will fill the range with the Fibonacci sequence. I think there should be more interesting and creative uses of this function; let me know in the comments if you know any.
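As a concrete sketch of the Fibonacci trick above (first_fib is a hypothetical helper name of mine):

```cpp
#include <algorithm>
#include <vector>

// Fill a vector with the first n Fibonacci numbers (mod 1e9+7) using
// generate_n and a stateful mutable lambda, as described above.
std::vector<int> first_fib(int n) {
    std::vector<int> v(n);
    std::generate_n(v.begin(), n, [a = 0, b = 1]() mutable {
        int c = (a + b) % 1000000007;  // next term
        a = b; b = c;                  // shift the window
        return a;
    });
    return v;
}
```

The lambda keeps its own state between calls, which is exactly what makes generate_n work for sequences.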

### remove, remove_if, remove_copy, remove_copy_if

These functions are supposed to "remove" elements from the range. However, the functions can't just "resize" the range; they do not have access to the entire container. Therefore, they are almost always used with erase (a member function of the container). This is called the Erase-Remove Idiom. On C++20 and above, however, the Erase-Remove Idiom is unneeded in most cases: instead you can use erase(container, value) and erase_if(container, function). While this is the case for resizable containers, the range used with this function does not have to belong to a resizable container. For example, when you want to use it with an array, you can maintain a size variable and update it when you run this function, like this — sz = remove(arr, arr + sz, x) - arr. The latter two do the same operation while copying the result to another range. They are usually unused in CP (we do not want to add another factor to our space complexity), but if we want to preserve the original range, they may be used instead.
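A quick sketch of the idiom and its C++20 replacement (drop_value is a hypothetical helper name):

```cpp
#include <algorithm>
#include <vector>

// Classic erase-remove: remove() shifts the kept elements forward and
// returns the new logical end; the container's erase() then shrinks it.
std::vector<int> drop_value(std::vector<int> v, int x) {
    v.erase(std::remove(v.begin(), v.end(), x), v.end());
    // In C++20 the whole line above becomes just: std::erase(v, x);
    return v;
}
```

Forgetting the erase half (calling remove alone and expecting the vector to shrink) is a classic bug, which is exactly why the C++20 one-liner exists.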

### replace, replace_if, replace_copy, replace_copy_if

These do the same thing as the remove family, but instead of removing elements from the range, they replace them with another value. These do not resize the range, so they do not need to be used with erase. Same opinion as above about the latter two: you may use them when you need to preserve the original range. I have still not seen someone actually use the latter two in CP, though.

### swap, swap_ranges, iter_swap

Straightforward. swap does what its name suggests, swap_ranges swaps all elements of two ranges (like this short snippet — for(int i=0;i<n;i++)swap(a[i],b[i]);), and iter_swap is literally swap(*a,*b). Do I need to explain further? I don't think so.

### reverse, reverse_copy, rotate, rotate_copy, shift_left, shift_right

The former two reverse a range, the middle two rotate a range, and the last two (added in C++20) shift elements in a range. Why did I list these three sets of functions in the same paragraph? Because they serve a common purpose in CP (especially on Codeforces) — reducing the hassle of implementation. Usually D2A and D2B can be solved relatively quickly, but a fast mind may not be very helpful when the implementation is quite complex. This is where these functions come into action. Here is a practice problem in which one of these functions will help you; I suggest you try it if you haven't already. (Problem: 1711A - Perfect Permutation) These functions are useful in many problems with constructive algorithms in general, so it would be a good idea to use them when you can!
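As one example of the "reducing implementation hassle" point (not necessarily the intended solution of the linked problem; the helper name is mine): building a permutation of $1..n$ with $p_i \neq i$ for $n > 1$ is a one-liner with iota plus rotate.

```cpp
#include <algorithm>
#include <numeric>
#include <vector>

// Identity permutation rotated left by one: 2, 3, ..., n, 1.
// Every position i then holds a value different from i.
std::vector<int> shifted_permutation(int n) {
    std::vector<int> p(n);
    std::iota(p.begin(), p.end(), 1);               // 1, 2, ..., n
    std::rotate(p.begin(), p.begin() + 1, p.end()); // 2, 3, ..., n, 1
    return p;
}
```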

### shuffle, sample

Oh, shuffling and sampling, two important operations in randomized algorithms. The former shuffles the elements of a range uniformly with a given RNG. "Uniformly" here is important; it's the exact reason random_shuffle was deprecated and then removed in C++17! So make sure never to use random_shuffle. You can learn more about it in this blog. Now, the latter function should be used with care: it samples $\text{min}(N,\text{size})$ distinct elements from a range. Sometimes, however, you might just want to pick $N$ elements allowing duplicates, or $\frac{N}{\text{size}}$ might be so small that getting a duplicate is quite unlikely. In such situations it would be reasonable to just generate $N$ indexes in the range $[0,\text{size}-1]$, because sample has a time complexity of $O(\text{size})$, not $O(N)$. Therefore, you need to be careful when using this function; it might cause unwanted TLEs.
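A small sketch of both functions together (the helper name pick_three and the fixed seed are mine, for illustration):

```cpp
#include <algorithm>
#include <iterator>
#include <random>
#include <vector>

// Uniform shuffle with a Mersenne Twister engine, then sample 3 distinct
// elements. std::sample preserves the relative order of the chosen elements
// (for forward iterators) and runs in O(size), not O(count).
std::vector<int> pick_three(std::vector<int> v, unsigned seed) {
    std::mt19937 rng(seed);
    std::shuffle(v.begin(), v.end(), rng);
    std::vector<int> out;
    std::sample(v.begin(), v.end(), std::back_inserter(out), 3, rng);
    return out;
}
```

In a real round you would seed from a clock, e.g. mt19937 rng(chrono::steady_clock::now().time_since_epoch().count());.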

### unique, unique_copy

These two functions (the latter copying the result to another range) remove adjacent duplicates from a range. As they cannot resize the container by themselves, unique is usually paired with erase, similar to the Erase-Remove Idiom explained above. Removing only adjacent duplicates means that this function alone can't remove all duplicates in an arbitrary range; for that, the range needs to be sorted first. However, this does not mean that the function is only useful on sorted ranges. There are situations where we need to check adjacent groups; one example would be GCJ 2022 Round 1C — Letter Blocks. In a part of this problem's solution, we need to check whether a string is "grouped" (i.e. each letter appearing in the string makes up one contiguous group). Of course, you can do this by counting in $O(n)$, but I felt this method was very ugly and complex in terms of implementation. I have come up with a slower but elegant way to do this, with $O(n \log n)$ time complexity. Here is how.

How to do it — What I call the 'Double-unique method'
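For reference, the most common CP usage of unique is the classic sort + unique + erase deduplication (the helper name dedup is mine):

```cpp
#include <algorithm>
#include <vector>

// Remove ALL duplicates from a vector: sort first so equal elements become
// adjacent, then unique + erase removes the adjacent duplicates.
void dedup(std::vector<int>& v) {
    std::sort(v.begin(), v.end());
    v.erase(std::unique(v.begin(), v.end()), v.end());
}
```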

In this section we reviewed the modifying sequence operations, and found out situations where they can be used. In the next section of Chapter 1. Algorithms, we will be reviewing a wide variety of functions, such as sorting, merging, binary search and more. See you on the next section!

Back to chapter 0

• +13

By chromate00, 4 months ago,

Last time, we very briefly reviewed what the STL is. In this chapter of the series, we will be talking about the algorithms it provides and their value in CP (speaking of "value", some functions may be almost completely useless in CP due to another feature or some other reason). We are not going to talk about using ranges for now; they are bound for another article. In Section 1 of Chapter 1, we will be talking about non-modifying sequence operations. (I suggest that you read this article side by side with cppreference — I get a lot of ideas and knowledge there, and the sections are divided for convenience in reading side by side.)

### all_of, none_of, any_of

These three functions are quite straightforward and have $O(n)$ time complexity (if the passed predicate runs in $O(1)$ time). They may be useful in many situations, but I have not found a real usage example yet.
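As a tiny illustration of how they read (helper names are mine), these express whole-range conditions more declaratively than a hand-rolled loop:

```cpp
#include <algorithm>
#include <vector>

// "Is every element positive?"
bool all_positive(const std::vector<int>& v) {
    return std::all_of(v.begin(), v.end(), [](int x) { return x > 0; });
}

// "Is at least one element even?"
bool has_even(const std::vector<int>& v) {
    return std::any_of(v.begin(), v.end(), [](int x) { return x % 2 == 0; });
}
```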

### for_each, for_each_n (C++17)

These two functions apply a given (unary) function to each element of a range. Their time complexity is, quite clearly, $O(n)$, assuming the applied function takes $O(1)$ time. The former has been useless in my experience, as range-based for statements were sufficient. The latter, though, may be very useful, as it applies the function to only the first $n$ elements and is therefore not replaced by the range-based for statement.
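A minimal sketch of the "first $n$ elements only" point (the helper name double_first_k is mine):

```cpp
#include <algorithm>
#include <vector>

// Double only the first k elements of the vector; the rest stay untouched.
// A range-based for loop cannot express "first k" without an extra counter.
void double_first_k(std::vector<int>& v, int k) {
    std::for_each_n(v.begin(), k, [](int& x) { x *= 2; });
}
```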

### count, count_if

These two functions count the number of elements in the range that, for the former, match a value and, for the latter, satisfy a predicate. Their time complexity is $O(n)$, assuming the predicate takes $O(1)$ time. Both are useful in my opinion, but the latter is much more important in CP, because problems ask for the number of values meeting a certain condition more frequently than for a certain value.
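For example (the helper name count_odd is mine):

```cpp
#include <algorithm>
#include <vector>

// "How many elements are odd?" — the typical shape of a CP counting query.
long long count_odd(const std::vector<int>& v) {
    return std::count_if(v.begin(), v.end(), [](int x) { return x % 2 != 0; });
}
```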

### mismatch

This function iterates through two ranges and returns the first position where the values differ, or where the passed predicate returns false. The time complexity is $O(n)$, assuming the passed predicate (== by default) takes $O(1)$ time. Now, this function is very important, in the sense that we can compare two ranges and check whether every element in range A meets a condition relative to range B. We can even check whether a range is a palindrome with this function, by passing its iterators and reverse iterators and checking where the function stopped. Do note that this function returns a pair of iterators, which in turn can give it an advantage over the equal function (which only returns a boolean value).
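The palindrome trick might be sketched like this (the helper name is_palindrome is mine): walk the string forward and backward simultaneously, and check that no mismatch occurs in the first half.

```cpp
#include <algorithm>
#include <string>

// A string is a palindrome iff its first half equals its reversed second
// half; std::mismatch compares the two walks element by element.
bool is_palindrome(const std::string& s) {
    auto mid = s.begin() + s.size() / 2;
    return std::mismatch(s.begin(), mid, s.rbegin()).first == mid;
}
```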

### find, find_if, find_if_not

I am not going to talk too much about these functions in this article, their uses/time complexity/etc are too trivial and well-known. However, do note that, for the same reason stated about count_if, the latter two have relative importance over the former one.

### find_end, search

In my opinion, find_end would have been better named search_end, as search does the same thing, except that find_end searches for the last occurrence while search searches for the first. By default these functions search naively, so they have an $O(nm)$ time complexity. Do use KMP when needed; naive (and, in its worst case, Boyer-Moore) searching is too slow for CP.
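A quick sketch of both, returning positions instead of iterators (helper names are mine; remember these are $O(nm)$, fine only for small inputs):

```cpp
#include <algorithm>
#include <string>

// Index of the first occurrence of pat in text, or -1 if absent.
int first_occurrence(const std::string& text, const std::string& pat) {
    auto it = std::search(text.begin(), text.end(), pat.begin(), pat.end());
    return it == text.end() ? -1 : int(it - text.begin());
}

// Index of the last occurrence of pat in text, or -1 if absent.
int last_occurrence(const std::string& text, const std::string& pat) {
    auto it = std::find_end(text.begin(), text.end(), pat.begin(), pat.end());
    return it == text.end() ? -1 : int(it - text.begin());
}
```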

### find_first_of

This function finds, in a range, the first occurrence of any value from a given set (represented by another range). For example, you can find the first occurrence of any value from {2, 4, 6, 8} in a vector. This function has an $O(Sn)$ time complexity, and therefore when looking for exact matches it can be replaced with the tree-based/hash-based containers for $O(n \log S)$ and $O(n)$ time complexity respectively. However, these container-based methods might not do very well when you need to apply a function to compare the elements (trust me, traversing these containers is $O(n)$ but a pain due to cache issues), so in that case this function may be helpful.
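For example (the helper name first_vowel_pos is mine): finding the first vowel in a string, where the "set" is just another range.

```cpp
#include <algorithm>
#include <string>

// Index of the first character of s that is a vowel, or -1 if none.
// Each character of s is checked against the whole vowel set: O(S*n).
int first_vowel_pos(const std::string& s) {
    const std::string vowels = "aeiou";
    auto it = std::find_first_of(s.begin(), s.end(),
                                 vowels.begin(), vowels.end());
    return it == s.end() ? -1 : int(it - s.begin());
}
```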

### adjacent_find

This function compares adjacent values, and returns the first position where two adjacent values are identical, or where they satisfy a condition when a predicate is passed. It has $O(n)$ time complexity, assuming the comparison takes $O(1)$ time. I have found a usage of this function in 1713B - Optimal Reduction; consider the following explanation as a solution for the problem.

Explanation for 1713B

As shown in the explanation above, adjacent_find is great at comparing adjacent elements in place, without building a difference array. For this reason, I consider this function very important in CP.
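A generic sketch of this "compare neighbors in place" pattern (the helper name non_decreasing is mine): checking whether a sequence is sorted by looking for an out-of-order adjacent pair.

```cpp
#include <algorithm>
#include <vector>

// The sequence is non-decreasing iff no adjacent pair has a > b.
bool non_decreasing(const std::vector<int>& v) {
    return std::adjacent_find(v.begin(), v.end(),
                              [](int a, int b) { return a > b; }) == v.end();
}
```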

### search_n

This function finds the first point where $n$ consecutive elements appear in a row that are equal to the given value (or satisfy a condition relative to it). This may be useful in problems where, for instance, you need to find $k$ consecutive positive numbers. Do remember, though, that the given predicate receives the element as its left argument and the given value as its right argument.
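The "$k$ consecutive positive numbers" example might look like this (the helper name is mine; note the argument order of the predicate, and that the value argument here is just a dummy):

```cpp
#include <algorithm>
#include <vector>

// Index where the first run of k consecutive positive numbers starts, or -1.
// The predicate ignores the dummy value 0 and only tests the element.
int first_k_positive(const std::vector<int>& v, int k) {
    auto it = std::search_n(v.begin(), v.end(), k, 0,
                            [](int elem, int /*value*/) { return elem > 0; });
    return it == v.end() ? -1 : int(it - v.begin());
}
```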

In this article, we reviewed the Non-modifying sequence operation functions in the Algorithm category of STL. In the next section, we will be reviewing the Modifying sequence operation functions.

Back to chapter 0

• +7

By chromate00, 4 months ago,

I, you, and probably everyone using C++ for CP are using the STL. It wouldn't even be a lie if I told you that std::sort has probably saved many people's lives. However, std::sort is not the entire STL. Even combined with the relatively wide spectrum of features we use daily, we can't say we're using the entirety of the STL. Therefore, I am writing this series of blogs to review a lot of important STL features, and give you some examples of using them in actual CP problems.

So, before we begin, what really is the STL? Is it a part of C++? Quite clearly, yes. Is it C++ itself? Almost, but no. The STL is almost analogous to the Standard Library for C++ in most cases, but even so, the Standard Library and the STL are two different and separate entities. (There are even cases where you can use C++ but not the STL.) In this series I will try my best not to mislead you into thinking the Standard Library and the STL are the same thing.

What does the STL provide? It provides containers, iterators and algorithms. In this series, we will cover algorithms, containers and iterators in their corresponding posts, and after that, we will look at other notable features (which arguably may not be part of the STL, but are still quite useful and important). I shall update this article with a link to the next chapter as soon as it is out.

• +4

By chromate00, history, 5 months ago,

It is well known that Fenwick trees can easily implement range update/point query or point update/range query for many kinds of operations, and XOR is one of those operations. Also, a range update/range query Fenwick tree with sum queries has been implemented before (AFAIK it was first shown on Petr's blog, correct me if I'm wrong) by treating a range update as updating the first-degree coefficients and the constants separately.
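For reference, here is a minimal sketch of the sum version as I understand that construction (1-indexed; the struct name and layout are mine): one internal tree holds the first-degree coefficients, the other the constants, and a prefix sum is recovered as $B_1(i) \cdot i - B_2(i)$.

```cpp
#include <vector>

// Range-update / range-query Fenwick tree for sums.
struct RangeBIT {
    int n;
    std::vector<long long> b1, b2;  // coefficients and constants
    RangeBIT(int n) : n(n), b1(n + 1, 0), b2(n + 1, 0) {}
    void add(std::vector<long long>& b, int i, long long d) {
        for (; i <= n; i += i & -i) b[i] += d;
    }
    long long sum(const std::vector<long long>& b, int i) const {
        long long s = 0;
        for (; i > 0; i -= i & -i) s += b[i];
        return s;
    }
    // add d to every element of [l, r]
    void update(int l, int r, long long d) {
        add(b1, l, d); add(b1, r + 1, -d);
        add(b2, l, d * (l - 1)); add(b2, r + 1, -d * r);
    }
    long long prefix(int i) const { return sum(b1, i) * i - sum(b2, i); }
    // sum of [l, r]
    long long query(int l, int r) const { return prefix(r) - prefix(l - 1); }
};
```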

Now I am writing this blog to ask this question: can we expand this idea to operations other than addition, such as XOR (or even multiplication)? If it is possible, I would also appreciate an explanation of how it could be implemented.

• +11

By chromate00, history, 6 months ago,

In this blog post (which I am shamelessly using to practice how blogs work, never mind that), I am going to introduce to you my codeforces setup.

First and most importantly, the chair. Some of you may understand the importance of the chair in competitive programming, while others may not. In my opinion, a fitting choice of chair is very crucial to getting a good rating on codeforces.

Aaaaaand here's a brief pic of the chair I use during codeforces rounds.

The chair

(Yes, I searched up the pic on google but I can assure you the chair is more or less identical to the one I use.)

So what do I really mean?

• +44