Important decision for 0.7. Need your input.

Posted:
Jul 13, 2004 @ 2:59am
by kornalius

Posted:
Jul 13, 2004 @ 4:09am
by mervjoyce

Posted:
Jul 13, 2004 @ 4:24am
by PointOfLight

Posted:
Jul 13, 2004 @ 9:57am
by webba
Kornalius,
Hi. I'm not an accomplished C coder. I used to use OnboardC on Palm OS when I had one, and I loved it. It would generate a skeleton event structure and you just plugged bits of code in where you wanted them. There was an onboard GUI form builder and resource compiler. It was so simple to use... Those were the days.
The point is, it produced apps which were fast. They were about 10% faster than equivalent code generated by gcc. To me, processing speed is one of the top bullet points you advertise about a language. Why? Because it's what the end user, and therefore the programmer, wants, and it ultimately gives you more scope, more choice, more excellence.
I can see what mervjoyce and pointoflight are saying: What good is an unreliable Hare in a race? Give me a Rabbit that gets past the finish line.
Your dynamic compiling method seems like an inspired choice. I don't pretend to know how it works, but if it were me, all things being equal, I don't think I'd abandon the idea too quickly. Could you keep it as a feature, only to be used on code where it doesn't cause any bizarre problems?
How does the dynamic compilation work? Maybe talking it through with the guys on here would help identify the problem.
Hope this helps.
Kind regards,
Andy

Posted:
Jul 13, 2004 @ 11:50am
by redshift
Stability and portability before everything, but also I'm not really interested in the gaming API.
Speed is something to improve later, I think.
Idea:
A way to compile on the desktop (x86) would be really great, and it should also speed up builds, since there is a lot more power on the desktop. That way we only build on the Pocket PC when the application is finished, and sometimes that build is needed only once.

Posted:
Jul 13, 2004 @ 2:07pm
by kornalius
Thanks for the quick replies guys,
I am not putting the dynamic compiler away; I will simply add the other compiler/interpreter alongside it. I will do some tests, and depending on how it performs, I will either keep it or discard it.
The dynamic compiler is stable, but there are some situations where it doesn't seem to work as expected.
The way the dynamic compiler works is simple: take the parsed PPL code, translate it to ARM machine code bytes in memory, and call the routine through its address.
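If it helps to picture it, here is a tiny standalone C sketch of the general idea: put machine code bytes into memory and call them through a function pointer. It is only an illustration, not PPL code; it uses x86-64 bytes and POSIX mmap so it runs on a desktop, whereas the real thing emits ARM bytes on the Pocket PC.

#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

int main(void) {
    /* Bytes for "mov eax, 42; ret" on x86-64; PPL would emit ARM bytes instead. */
    unsigned char code[] = { 0xB8, 0x2A, 0x00, 0x00, 0x00, 0xC3 };

    /* Get a page that is both writable and executable, then copy the bytes in. */
    void *buf = mmap(NULL, sizeof(code), PROT_READ | PROT_WRITE | PROT_EXEC,
                     MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (buf == MAP_FAILED) return 1;
    memcpy(buf, code, sizeof(code));

    /* Call the generated routine through its address, like the dynamic compiler does. */
    int (*fn)(void) = (int (*)(void))buf;
    printf("%d\n", fn());    /* prints 42 */

    munmap(buf, sizeof(code));
    return 0;
}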
Regards,
Kornalius

Posted:
Jul 13, 2004 @ 3:35pm
by webba

Posted:
Jul 13, 2004 @ 4:26pm
by kornalius
Hi Webba,
No, PPL compiles all the routines (proc/func) to ARM and then calls the main routine. What do you mean by cache? I have read somewhere about caching memory allocation but it doesn't make much sense to me.
So far, a simple while loop test with the new compiler / interpreter is showing amazing results. Here are the results I am getting compared to the old compiler:
      Old       New
---------------------
#1    1015 ms   565 ms
#2     680 ms   398 ms
Strange, but it's fast, and it gives me portability, easier debugging, and decent speed.
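For anyone curious about the shape of the test, it is just a tight while loop with a bit of arithmetic. In C it would look roughly like this (a paraphrase for illustration only; the actual test is written in PPL):

#include <stdio.h>
#include <time.h>

int main(void) {
    clock_t start = clock();

    /* A tight while loop doing a little arithmetic, similar in shape to the PPL test. */
    long i = 0, sum = 0;
    while (i < 1000000) {
        sum += i;
        i++;
    }

    clock_t end = clock();
    printf("sum=%ld, elapsed=%.0f ms\n",
           sum, (end - start) * 1000.0 / CLOCKS_PER_SEC);
    return 0;
}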
Regards,
Kornalius

Posted:
Jul 13, 2004 @ 5:01pm
by webba
Kornalius,
My questions about caching were based on the assumption that each routine was being compiled just in time, a la Java... I wondered if it was ending up recompiling the called routine for every iteration of the loop. Obviously not.
So the whole source is compiled. It still sounds like an excellent way of doing it.
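For the record, what I meant by caching was keeping the compiled form of each routine after its first call so it never needs recompiling, roughly like this C sketch (all the names and structures here are invented just to show the idea):

#include <stdio.h>

typedef int (*native_fn)(void);

static int answer(void) { return 42; }      /* stand-in for freshly generated code */

static int compile_count = 0;
static native_fn compile_routine(void) {
    compile_count++;                        /* a real compiler would emit ARM bytes here */
    return answer;
}

typedef struct {
    native_fn compiled;                     /* NULL until the routine is first called */
} routine;

static int call_routine(routine *r) {
    if (r->compiled == NULL)
        r->compiled = compile_routine();    /* compiled once, not once per iteration */
    return r->compiled();
}

int main(void) {
    routine r = { NULL };
    int sum = 0;
    for (int i = 0; i < 1000; i++)          /* the loop reuses the cached code */
        sum += call_routine(&r);
    printf("sum=%d, compiles=%d\n", sum, compile_count);
    return 0;
}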
I can't understand why the timings for the interpreted version are so fast? Almost twice the speed of the compiled one. I imagined it would be the other way around...
Nothing else is springing to mind.
Kind regards,
Andy

Posted:
Jul 13, 2004 @ 6:06pm
by dan33
It does seem a little odd that the interpreted version is getting better performance than the compiled one.

Posted:
Jul 13, 2004 @ 8:07pm
by kornalius
Odd, but don't forget that PPL is still an interpreted language after all. The generated machine code relied on internal variables, so for each function call a series of machine code bytes had to be produced. The code produced was not as compact and optimized as what eVC would generate.
The way the new interpreter works through the parsed information is pretty much the same internally, so the speed is about the same and sometimes faster, due to eVC optimization I guess. The interpreter is now about 4 lines of C code!
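To give an idea of what that loop looks like, the core is just a dispatch over handler pointers stored with each parsed instruction. Here is a rough standalone C sketch of that shape (the opcodes and structures are invented for illustration and are not the actual PPL internals):

#include <stdio.h>

typedef struct instr instr;
typedef void (*handler)(const instr *, long *);

struct instr {
    handler fn;     /* handler chosen when the source was parsed */
    long    arg;
};

static void op_add (const instr *i, long *acc) { *acc += i->arg; }
static void op_mul (const instr *i, long *acc) { *acc *= i->arg; }
static void op_halt(const instr *i, long *acc) { (void)i; (void)acc; }

int main(void) {
    instr code[] = { { op_add, 5 }, { op_mul, 3 }, { op_add, 1 }, { op_halt, 0 } };
    long acc = 0;

    /* The whole "interpreter" is this dispatch loop. */
    for (int pc = 0; code[pc].fn != op_halt; pc++)
        code[pc].fn(&code[pc], &acc);

    printf("%ld\n", acc);   /* (0 + 5) * 3 + 1 = 16 */
    return 0;
}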

The compiler is smaller and faster too.
Regards,
Kornalius