[Brought here from LispSucks, since that page already had more clutter than it needed...]
Don't you mean "at assembly"? And isn't an editorial comment to that effect called for? We may have a variety of reactions to the asm below; it doesn't entirely speak for itself.
Negative. This page was abstracted from the original LispSucks page and brought here just to keep the yards of assembly listings out of the main stream of ThreadMode conversation (read, bickering). Therefore, "Lisp sucks, in assembly."
This still isn't clear English, even given your explanation. This amounts to something like "Lisp sucks: assembly examined" or some such?
Not really. It could have been, "Lisp sucks, in Swedish" or "Lisp sucks, in Klingon" but it's in assembly. Eh?
Lisp isn't blazing fast? I guess you aren't talking about CommonLisp, then.
I agree that Lisp is faster than most of the popular languages today - JavaLanguage and PythonLanguage. I was really only talking about CeeLanguage and CeePlusPlus, which are still the dominant languages for programming things like servers and large GUI-based applications. It's worth pointing out that in both of those cases it has been found useful to embed Lisp in the program.
For my own code, which tends to be highly numeric and nothing like business apps, I usually find Lisp to range within +/- 20% of C++ (yes, sometimes it is faster). Sometimes, however, I find it is a factor of 2-5 slower on the same code, where C++ had a particularly clever optimization I can't get my Lisp compiler to make. On the other hand, sometimes my Lisp is a factor of 2-5 faster, if I implement an algorithmic improvement which is simply too much trouble in C++. Of course, YourMileageMayVary, but I would expect that raw execution speed is almost never a compelling argument for C++ over CL for 'normal' applications. I myself touch on some applications (large-scale simulations) where raw execution speed is crucial, but in that case you run on special hardware and expect to jump through many hoops to get that speed, including things like re-implementing in Fortran or C++ with compiler-specific extensions, etc. Large server apps may have similar constraints, but we are hardly talking about mainstream development now.
ComputerLanguageBenchmarksGame
Without exception, every proprietary Lisp implementation on the market has a native compiler that puts out good machine code and respects optimizing declarations. And some free ones do also.
Some Lisps, like CormanLisp, do not even contain an interpreter! Every expression processed by Corman Lisp is converted into 80x86 machine code and then executed as such.
And many Lisps are not afraid to get you close to the metal. The disassemble function is a nice example (this is in CMUCL):
 * (defun sum (n) (loop for i from 1 to n sum i))
 SUM
 * (sum 5)
 15

PJB: How convenient an example. Rather, try (sum 5000000000). See also: https://groups.google.com/forum/message/raw?msg=comp.lang.lisp/a36NKUYogvI/KWBtYkb-SXoJ (compare Lisp and C with factorial instead of sum).
 * (disassemble #'sum)
 Compiling LAMBDA (N):
 Compiling Top-Level Form:
 48131B90:       .ENTRY "LAMBDA (N)"(n)        ; (FUNCTION (T) NUMBER)
      BA8:       POP     DWORD PTR [EBP-8]
      BAB:       LEA     ESP, [EBP-32]
      BAE:       MOV     EDI, EDX
      BB0:       CMP     ECX, 4
      BB3:       JNE     L2
      BB5:       MOV     [EBP-12], EDI
      BB8:       MOV     DWORD PTR [EBP-16], 4 ; No-arg-parsing entry point
      BBF:       MOV     DWORD PTR [EBP-20], 0
      BC6:       JMP     L1
      BC8: L0:   MOV     EDX, [EBP-20]
      BCB:       MOV     EDI, [EBP-16]
      BCE:       CALL    #x100001C8
      BD3:       MOV     ESP, EBX
      BD5:       MOV     [EBP-20], EDX
      BD8:       MOV     EDX, [EBP-16]
      BDB:       MOV     EDI, 4
      BE0:       CALL    #x100001C8
      BE5:       MOV     ESP, EBX
      BE7:       MOV     [EBP-16], EDX
      BEA: L1:   MOV     EDX, [EBP-16]
      BED:       MOV     EDI, [EBP-12]
 ;;; [4] (LOOP FOR I FROM ...)
      BF0:       CALL    #x10000460
      BF5:       MOV     ESP, EBX
      BF7:       CMP     EDX, #x2800000B       ; NIL
      BFD:       JEQ     L0
      BFF:       MOV     EDX, [EBP-20]
      C02:       MOV     ECX, [EBP-8]
      C05:       MOV     EAX, [EBP-4]
      C08:       ADD     ECX, 2
      C0B:       MOV     ESP, EBP
      C0D:       MOV     EBP, EAX
      C0F:       JMP     ECX
      C11:       NOP
      C12:       NOP
      C13:       NOP
      C14:       NOP
      C15:       NOP
      C16:       NOP
      C17:       NOP
      C18: L2:   BREAK   10                    ; Error trap
      C1A:       BYTE    #x02
      C1B:       BYTE    #x19                  ; INVALID-ARGUMENT-COUNT-ERROR
      C1C:       BYTE    #x4D                  ; ECX

Note that disassemble is a standard feature. Here's how it works in CLISP, a bytecode-interpreting implementation:
 [1]> (defun sum (n) (loop for i from 1 to n sum i))
 SUM
 [2]> (disassemble #'sum)
 Disassembly of function SUM
 (CONST 0) = 0
 (CONST 1) = 1
 1 required argument
 0 optional arguments
 No rest parameter
 No keyword parameters
      0     (CONST&PUSH 0)                    ; 0
      1     (CONST&PUSH 1)                    ; 1
      2     (JMP L12)
      4     L4
      4     (LOAD&PUSH 1)
      5     (LOAD&PUSH 1)
      6     (CALLSR&STORE 2 54 1)             ; +
      10    (LOAD&INC&STORE 0)
      12    L12
      12    (LOAD&PUSH 0)
      13    (LOAD&PUSH 4)
      14    (CALLSR&JMPIFNOT 1 49 L4)         ; >
      18    (LOAD 1)
      19    (SKIP&RET 4)
 #<COMPILED-CLOSURE SUM>

CommonLisp ain't for QuicheEaters?!
By the way, if anyone's looking at the disassembled x86 machine code for the SUM function above and thinking "yeccch, that's ten times worse than it would be in C", here's what you get if you throw in some type declarations (like you need all the time in C) and tell the compiler to prefer speed over safety (like you get all the time in C).
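The page never actually shows the declared source, so here is a sketch of what "throwing in some type declarations" might look like; the exact declarations and the resulting machine code will vary by implementation (the listing below it came from CMUCL):

```lisp
;; A sketch of the declared version of SUM. The fixnum declarations
;; and (speed 3) (safety 0) settings let the compiler drop generic
;; arithmetic and argument checks, as in typical C compilation.
(defun sum (n)
  (declare (type fixnum n)
           (optimize (speed 3) (safety 0)))
  (loop for i fixnum from 1 to n
        sum i fixnum))
```

With safety at 0, of course, you also get C-style behavior on bad input: no argument-count or type checking.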
 48081060:       .ENTRY SUM(n)                ; (FUNCTION (FIXNUM) FIXNUM)
       78:       POP     DWORD PTR [EBP-8]
       7B:       LEA     ESP, [EBP-32]
       7E:       MOV     EAX, 4               ; No-arg-parsing entry point
       83:       XOR     ECX, ECX
       85:       JMP     L1
       87: L0:   ADD     ECX, EAX
       89:       ADD     EAX, 4
       8C: L1:   CMP     EAX, EDX
       8E:       JLE     L0
       90:       MOV     EDX, ECX
       92:       MOV     ECX, [EBP-8]
       95:       MOV     EAX, [EBP-4]
       98:       ADD     ECX, 2
       9B:       MOV     ESP, EBP
       9D:       MOV     EBP, EAX
       9F:       JMP     ECX
       A1:       NOP
       A2:       NOP
       A3:       NOP
       A4:       NOP
 ;;; [7] (LOOP FOR I FIXNUM ...)
       A5:       NOP
       A6:       NOP
       A7:       NOP

Of course, a really smart compiler could produce
 480C65D0:       .ENTRY SUM(n)                ; (FUNCTION (FIXNUM) (SIGNED-BYTE 29))
      5E8:       POP     DWORD PTR [EBP-8]
      5EB:       LEA     ESP, [EBP-32]
      5EE:       LEA     EAX, [EDX+4]         ; No-arg-parsing entry point
      5F1:       SAR     EDX, 2
      5F4:       IMUL    EDX, EAX
      5F7:       SAR     EDX, 1
      5F9:       AND     EDX, 4294967292
      5FF:       MOV     ECX, [EBP-8]
      602:       MOV     EAX, [EBP-4]
      605:       ADD     ECX, 2
      608:       MOV     ESP, EBP
      60A:       MOV     EBP, EAX
      60C:       JMP     ECX
      60E:       NOP
      60F:       NOP

instead ...
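What that hypothetical "really smart" output is doing is replacing the loop with Gauss's closed form 1 + 2 + ... + n = n(n+1)/2, computed directly on the tagged fixnum representation (the LEA/SAR/IMUL sequence). You can, of course, apply the same algebraic rewrite by hand in Lisp:

```lisp
;; Closed-form version of SUM: n(n+1)/2. The product n(n+1) is
;; always even, so the division is exact.
(defun sum (n)
  (declare (type fixnum n))
  (/ (* n (1+ n)) 2))
```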
I saw the title and initially envisioned something that looked like this :-)
 (EAX (JMP (SAR EPX NOP) ESP LEA) MOV (ADD EBP)) JLE EBP)