Some thoughts:
- The questions are difficult to understand. I would provide more examples.
- I don't understand why the test cases are hidden. Are you testing the candidate's ability to decipher your text, or their ability to write working code?
- I think these questions aren't very relevant to a screening test.
That last point deserves some explanation.
We're going through the same thing at work, designing an automated screening test for interviews. The biggest challenge we face is that a false negative is much more costly than a false positive. Interviewing the wrong candidate will cost you 30 minutes, one hour at most. Missing out on a good candidate will cost you a lot more, because good talent is rare.
First of all, I think these are difficult questions. It took me 10 minutes to understand what's expected of me for each question, and I'm not even under the stress of a job interview. There are really good employees who will fail these tests, and you'll miss out on a chance to hire them.
Second of all, I don't think algorithm skills are the only thing you want to screen for. As a matter of fact, I think they're the last thing you want to screen for. I would rather have a test that makes sure the candidate has much more relevant skills, like:
- SQL and some database basics.
- Basic Unix administration.
- Familiarity with source control systems.
Even if your screening test ends up being a multiple-choice form, it's still more relevant than complex algo questions. Those are fun competitions between programmers, but not a good way of determining who would be a good hire.
The goal of the screening test is not to identify the best candidates (no automated test will ever do that), but to weed out the hopeless cases, the (surprisingly high) percentage of candidates who are a waste of time. In that respect, I think replacing your questions with something as simple as a fizzbuzz would be enough.
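To give a sense of the difficulty level I mean, here's a minimal fizzbuzz sketch (the classic filter task: multiples of 3 print "Fizz", multiples of 5 print "Buzz", multiples of both print "FizzBuzz"):

```python
def fizzbuzz(n):
    """Return the FizzBuzz sequence for 1..n as a list of strings."""
    out = []
    for i in range(1, n + 1):
        if i % 15 == 0:      # divisible by both 3 and 5
            out.append("FizzBuzz")
        elif i % 3 == 0:
            out.append("Fizz")
        elif i % 5 == 0:
            out.append("Buzz")
        else:
            out.append(str(i))
    return out

print("\n".join(fizzbuzz(15)))
```

A candidate who can't produce something like this in a few minutes is exactly the kind of hopeless case the screen should catch, and it takes almost no reading comprehension to understand the task.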