Comments

Your challenge won't be to prove to them that you're a good problem solver. It will be to show that these abilities benefit their team. So you'll have to show that your software engineering practices are up to par and don't resemble contest programming. That you can accept it when others write suboptimal algorithms, and don't prematurely optimize code. That you, as a junior programmer (in their eyes), can accept feedback and don't consider yourself too much of "an exceptional candidate".

Also, it might help to explain that you don't prefix your production code with (from your submission):

const int magic = 68;

My personal tip is to bake pancakes for the entire team. They tend to consider that a major benefit.

On MidoriFuse: I killed KAN, 5 years ago
+89

That's not very freundly of you.

Unfortunately CF doesn't give an estimate for rating uncertainty, but if you could somehow incorporate it into the statistics, that'd be interesting. For example, check and mark people who have done only very few contests or whose last participation was long ago. That way it should be possible to distinguish noise due to an inaccurate CF rating from noise due to actual differences between IOI and CF contests.

On spam_123: What happened?, 5 years ago
+25

It was the open hacking phase.

Because the server ran out of steam?

Is it rated?

0

Isn't standard DP fine for D? E was interesting.

Seeing that there is quite some time between these two submissions, this is probably a difference in how the judge measured memory usage. If you submit your older solution again, you should get the same memory usage. See here.

The only difference between the bitwise AND and the modulus operator is their behavior for negative numbers; otherwise the compiler would replace the modulus with a single bitwise operation outright. It does do so when you check for even numbers. So there's no reason this would have a notable effect on memory usage.

On AmShZ: Your favorite problem?, 5 years ago
+6

IOI13 artclass

Yes, WHR as a whole is definitely not the right fit here. Partly because of unnecessary features, and partly because of the complexity; the retroactivity especially can be very confusing to users. I just thought some concepts might still be interesting, if not for ranking users directly then simply for making nice comparisons.

As a little inspiration, here's an example plot done with WHR in a 1v1 setting, comparing two accounts controlled by the same person: [Rating history example picture] (Source)

Thanks to the retroactivity, it is usually easy to distinguish quick learners from people who've had previous experience. This is with the expected Elo variance per day set to 500, instead of the 14 suggested in the paper. Still, the graph can smoothly model periods of skill change as well as stagnant phases.

+5

So you're trying to do a lower-bound estimation, such that you can guarantee an X% likelihood that a person is at their displayed rating or above, right?

Have you looked at Bayesian Elo, i.e. calculating Elo with a maximum-likelihood estimator? I think it's a great way to improve the convergence of classical algorithms and also to get a good error estimate. If you haven't already seen it, I suggest you check out Whole History Rating, which makes use of it. I also have an implementation if you want to try it.

+5

I like the descriptions, but I wouldn't take them too seriously in relation to IOI/ACM. Coming to Codeforces after having done both, I do feel that the problems here are noticeably different. That is to be expected, seeing that purely algorithmic tasks here wouldn't be much more than a test of your templates.

-7

Nice! A round just for us newbies.