Base hypothesis
Let me start by stating the obvious: this kind of nano-optimization is ludicrous, cute and entertaining, but it's not something anyone will ever need to care about (not to mention the futility of optimizing a brainfuck program for performance). That being said, I think this makes for an interesting exercise. For the sake of sanity, let's make some assumptions first:
- We'll only consider in our measurements the cost of incrementing/decrementing the value of a cell and moving the pointer. That means we're only counting the operations + - > < and ignoring the rest: jumps, tests and I/O operations. Note that I have no idea how CPUs work, so I don't know if these assumptions are valid.
- Again for the sake of sanity, we'll consider each of the 4 operations equivalent in performance. That means that given an int pointer i, I'm considering that i++ and (*i)++ are equivalent in execution time.
Counting the operations
Here's the generated piece of code:
+[--------->++<]>+.
Each iteration of the loop performs 13 operations (9 decrements, 2 increments and 2 pointer moves). The cell starts at 1 and loses 9 per iteration; with 8-bit wrapping cells it reaches 0 after 57 iterations, since 9 × 57 = 513 ≡ 1 (mod 256). Adding the 3 operations outside the loop, that's a total of:
57 * 13 + 3 = 744 operations.
In contrast, your piece of code is:
+++++++++++[>++++++++++<-]>+++++.
Although it looks like it has more characters, it's actually doing a lot fewer operations. Your loop also has 13 operations per iteration. The cell starts at 11 and is decremented once per pass, so the loop runs 11 times, and there are 17 operations outside the loop (the 11 initial increments, then a pointer move and 5 more increments), so:
11 * 13 + 17 = 160 operations.
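If you want to check these counts empirically, here's a minimal sketch of a brainfuck step-counter in Python (the function name `count_ops` and the 30,000-cell tape are my own choices, not anything standard): it runs a program on 8-bit wrapping cells and counts only the four operations our cost model considers.

```python
def count_ops(program):
    """Execute `program` on 8-bit wrapping cells and count
    only the + - > < operations actually performed."""
    tape = [0] * 30000
    ptr = pc = ops = 0

    # Pre-compute matching bracket positions.
    jumps, stack = {}, []
    for i, c in enumerate(program):
        if c == '[':
            stack.append(i)
        elif c == ']':
            j = stack.pop()
            jumps[i], jumps[j] = j, i

    while pc < len(program):
        c = program[pc]
        if c == '+':
            tape[ptr] = (tape[ptr] + 1) % 256
            ops += 1
        elif c == '-':
            tape[ptr] = (tape[ptr] - 1) % 256
            ops += 1
        elif c == '>':
            ptr += 1
            ops += 1
        elif c == '<':
            ptr -= 1
            ops += 1
        elif c == '[' and tape[ptr] == 0:
            pc = jumps[pc]   # jump past the matching ]
        elif c == ']' and tape[ptr] != 0:
            pc = jumps[pc]   # jump back to the matching [
        pc += 1              # '.' and ',' are ignored, per the cost model
    return ops

print(count_ops('+[--------->++<]>+.'))                # 744
print(count_ops('+++++++++++[>++++++++++<-]>+++++.'))  # 160
```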
By this operation count, your code should execute faster.
Conclusion
It should execute faster, but you won't be able to tell: because of the way CPUs work internally, because of the assumptions we made, and most importantly, because any modern CPU will execute either version instantly. Something as simple as printing "hello world" in a language like Python needs several orders of magnitude more operations and still runs instantly. It is virtually impossible to measure any difference in execution time between the 2 pieces of code you're trying to compare.
Moreover, even supposing you manage to create something remotely usable in brainfuck, this kind of optimization is almost never relevant. Most programs will be stuck waiting for some sort of I/O operation (and in 2014 that I/O will often be done in a network-distributed fashion), so caring about shaving microseconds of CPU time is often referred to as "premature optimization" (a.k.a. "the root of all evil").
Going further
That being said, the exercise is not completely irrelevant. Brainfuck never was (and never will be) a language with any real-world value, but it's great for exercising. Here are some ideas you can explore:
- The biggest mistake your program makes is assuming that you only need to output one character and exit. That will rarely happen; instead you'll be asked to print several characters to form a sentence. It would be smart to start by pre-populating a fixed number of cells with values that cover a good range of the ASCII alphabet, so you avoid building these values from scratch for each character. We have already explored something similar on this forum.
- As I mentioned before, it doesn't make sense to optimize for performance, but it would make sense to optimize for size. Try to write a program that takes a string as input and outputs the shortest brainfuck program that will print that string when executed.
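As a starting point for that last exercise, here's a naive baseline generator in Python (the name `naive_bf` is mine, and its output is far from the shortest possible): it uses a single cell and emits + or - to move from one character code to the next, then . to print.

```python
def naive_bf(text):
    """Emit a (very suboptimal) brainfuck program that prints `text`,
    using a single cell: adjust it to each character code, then print."""
    out = []
    cur = 0  # current value of the single cell we use
    for ch in text:
        delta = ord(ch) - cur
        out.append(('+' if delta > 0 else '-') * abs(delta))
        out.append('.')  # output the cell as an ASCII character
        cur = ord(ch)
    return ''.join(out)

print(naive_bf('hi'))  # 104 times '+', then '.', then '+', then '.'
```

Comparing this naive output against hand-tuned programs like the two above gives you a baseline for evaluating smarter strategies (loops, multiple cells, cell reuse).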