Planned benchmarks


Do you plan to release new benchmarks? If so, when do you plan to do it?
Even if you won’t, have you developed a model internally? Do you have an idea of the type of performance that could be achieved?


We are not planning to release any additional benchmarks, as we have no other internal model.

We are expecting this challenge to be… challenging!

However, the current top scores on the leaderboard provide alternative benchmarks and show that it is possible to significantly improve on the benchmark score. How far is it possible to go? We do not know, but we are looking forward to being impressed!

Hi Remjez,
I can propose another benchmark of the same type as the one CFM submitted.
If you take the median of each series, you get a score of 26.486%, whereas with the mean you get 28.207%. Maybe understanding this difference can give some information, or not :p.
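In case it helps, here is a minimal sketch of such a per-series baseline. The function name, the `(n_series, n_points)` layout, and the toy data are all my own assumptions for illustration, not the challenge's actual data format:

```python
import numpy as np

def baseline_predictions(series, use_median=True):
    """Per-series baseline: predict the median (or mean) of each series.

    `series` is assumed to be array-like of shape (n_series, n_points);
    the real challenge data layout may differ.
    """
    arr = np.asarray(series, dtype=float)
    if use_median:
        return np.median(arr, axis=1)
    return np.mean(arr, axis=1)

# Toy example: the median is more robust to a single outlier than the mean.
toy = [[1.0, 2.0, 3.0, 100.0],
       [4.0, 5.0, 6.0, 7.0]]
print(baseline_predictions(toy, use_median=True))   # [2.5, 5.5]
print(baseline_predictions(toy, use_median=False))  # [26.5, 5.5]
```

The robustness to outliers is presumably why the median baseline scores differently from the mean here.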

Thank you both for your responses.
It’s interesting to see that the median significantly improves performance.
I have a score of 21.5 and wanted to know whether it was possible to do better. But I guess we just have to wait.

Nobody knows yet if it is possible to do better in a robust way (i.e. that would yield good performance on a different test set). That said, I would not be surprised to see improvements over this score in the coming months, as participants will have many interesting new ideas.