Inspired by HelloWorldInManyProgrammingLanguages and WillemBogaerts' posting at http://groups.yahoo.com/group/extremeprogramming/message/83146:
WillemBogaerts: "[T]he programming languages we use also force our minds towards ways of thinking. For me, LISP is far more the way I think than the "standard" languages like Basic, java, etc. Lisp is halfway to a functional language. It is as imperative as java or basic, but the line of thoughts of functional language can be used also."
To give an example of the difference in thinking:
Suppose you want to program the dot product of two vectors.
The "standard" way of thinking is: "Oh, I need a variable with a running sum, then I need an index to run over the elements of the vectors, and I write a loop that adds the product of each pair of elements to the running sum, after initializing it to 0." All tiny steps that drift far from the problem to be solved.
The "functional" way of thinking is: "I want the sum of the elementwise multiplication of the two vectors." This is much closer to the formal problem definition.
This also leads to a huge difference in code:
(in C)
 /* Assume that a and b point to arrays with at least length elements. */
 /* Assume that none of the intermediate values overflows a double. */
 double dotProduct(double *a, double *b, int length)
 {
     double runningSum = 0;
     for (int index = 0; index < length; index++)
         runningSum += a[index] * b[index];
     return runningSum;
 }
(in LispLanguage)
 ;; assume that a and b are the same length.
 ;; assume that corresponding elements of a and b can be multiplied together.
 (defun dot-product (a b)
   (reduce '+ (mapcar '* a b)))
(The original code here used "apply" instead of "reduce", but that may hit a limit on the number of arguments.)
The name of the Lisp function "mapcar" is not very descriptive, but it means something like "perform elementwise".
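For example, applied to two lists it multiplies them pairwise:

 (mapcar '* '(1 2 3) '(4 5 6))  ; => (4 10 18)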
However, you'd end up consing the extra list returned from MAPCAR, so in a production system most Lispers would use a straightforward, non-functional loop:
 ;; assume that a and b are the same length.
 ;; assume that corresponding elements of a and b can be multiplied together.
 (defun dot-product (a b)
   (loop for x across a
         for y across b
         summing (* x y)))
Also, the above would probably have declarations on the types of the vectors being multiplied. And you'd want to check that they have congruent dimensions (not done in the above code).
Or, if they already included the SERIES package, they might use that; it provides macros which transform applicative-style code into loops transparently. -- SmugLispWeenie
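A rough sketch of what that might look like (assuming the SERIES package is loaded and its operators SCAN, MAP-FN, and COLLECT-SUM are accessible):

 (defun dot-product (a b)
   ;; SCAN turns each list into a series, MAP-FN multiplies
   ;; elementwise, and COLLECT-SUM totals the products; SERIES
   ;; fuses the whole pipeline into a single loop at compile time.
   (collect-sum (map-fn 'number #'* (scan a) (scan b))))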
Bear with me here - I'm a relative Lisp newbie, I've never tried this mechanism before, and I realise it's a contrived example. But I just recognised a pattern: we have two versions of a program, one being a little prettier but less efficient than the other, but both obviously equivalent (datatype aside). Might it not be fun to write a CompilerMacro? to teach the Lisp compiler how to translate the pretty version into the efficient one automatically? Here's one I hacked up for learning purposes - benchmarking indicates that it does indeed make the pretty version translate into the efficient code. Great fun!
 (define-compiler-macro apply (&whole exp operator &rest args)
   "Optimize (APPLY '+ (MAPCAR '* ....)) into an equivalent loop."
   (if (and (equal operator ''+)
            (equal (caar args) 'mapcar)
            (equal (cadar args) ''*))
       (make-sum-of-products (nthcdr 2 (car args)))
       exp))

 (defun make-sum-of-products (list-exps)
   "Return code to sum the element-wise products of LIST-EXPS."
   (let ((vars '()))
     (flet ((for-clause (list-exp)
              (push (gensym) vars)
              `(for ,(car vars) in ,list-exp)))
       `(loop ,@(mapcan #'for-clause list-exps)
              summing (* ,@vars)))))
It seems straightforward to generalise this to handle similar operations. I bet there are a million caveats, but, is Lisp fun or what? :-)
Also, even in a production system, the EightyTwentyRule applies to performance. Some code has to be fast, but most doesn't. If you know which is which, you get to write cute code most of the time.
No, changing APPLY would be a spectacularly bad thing to do. But, as I said, if you want code that transforms an applicative style of coding into an iterative one (transparently), check out the SERIES package, by Waters.
I suspected "advising" APPLY would be somehow bad. But why? -- LukeGorrie (who's reading about SERIES...)
Assuming VisualBasicNine or greater, the most generic implementation looks like this:
 Function dotProduct(ByVal a As IEnumerable(Of Double), ByVal b As IEnumerable(Of Double)) As Double
     Return a.Zip(b, Function(x, y) x * y).Sum()
 End Function
SteveHowell: Here's the test-first Python way of looking at the dot product problem:
 def test():
     assert sum([3,4,5]) == 12
     assert ElementWiseMultiplication([3,2], [4,5]) == [12,10]
     assert DotProduct([1,2], [3,-4]) == -5

 def ElementWiseMultiplication(list1, list2):
     return [list1[i] * list2[i] for i in range(len(list1))]

 def DotProduct(list1, list2):
     return sum(ElementWiseMultiplication(list1, list2))
It's a bit longer than the Lisp solution, but it fits my brain a bit better. YMMV.
The Python implementation need not be longer:
 def dot_product(a, b):
     return reduce(lambda sum, p: sum + p[0]*p[1], zip(a, b), 0)
This is basically the same as the short LISP implementation above. Perhaps Python really almost is LISP with InfixNotation: http://www.norvig.com/python-lisp.html
SteveHowell's version can be reduced to the even shorter:
 def dot_product(a, b):
     return sum([a[i]*b[i] for i in range(len(a))])
However, the most Pythonic way is clean, lazy, and works on any pair of iterables (if you don't need LazyEvaluation, use zip rather than izip):
 from itertools import izip
 def dot_product(a, b):
     return sum(x*y for (x, y) in izip(a, b))
Or a more FP style:
 from itertools import imap
 from operator import mul
 def dot_product(a, b):
     return sum(imap(mul, a, b))
Now it looks even more like Lisp.
Yes, my mileage varies greatly. The LISP solution is beautifully elegant.
While we're at it, here it is in Smalltalk:
 dotProductOf: a and: b
     ^(a with: b collect: [:eachA :eachB | eachA * eachB])
         inject: 0 into: [:sum :each | sum + each]
or, as an extension to Float or Number:
 dotProductOf: aNum
     ^(self with: aNum collect: [:a :b | a * b])
         inject: 0 into: [:sum :each | sum + each]
allowing
 aNum dotProductOf: anotherNum
HaskellLanguage:
 map2 f = (map (uncurry f) .) . zip
 dotProduct = (sum .) . map2 (*)
or, without trying to look too smart and incomprehensible:
 dotProduct :: Num a => [a] -> [a] -> a
 dotProduct xs ys = sum (zipWith (*) xs ys)
or maybe
 dotProduct = (sum .) . zipWith (*)
which is about as terse as possible, although the partially applied (.) reads as either incomprehensible or idiomatic, depending on experience. ;-)
or
 dot [] [] = 0
 dot (x:xs) (y:ys) = x*y + dot xs ys
which is readable.
PerlLanguage:
 $sum += $a[$_] * $b[$_] for 0 .. $#a;
CeePlusPlus:
 #include <vector>
 #include <algorithm>

 template <typename T>
 T DotProduct(std::vector<T> const &v1, std::vector<T> const &v2)
 {
     T result = 0;
     for (unsigned i = 0; i < std::min(v1.size(), v2.size()); ++i)
         result += v1[i] * v2[i];
     return result;
 }
This code should work for any type T for which the binary operator '*' is defined. The for-loop could also be implemented using for_each<>(), but this would result in longer code, I think.
In standard C++ spirit you'd probably use iterators to separate the dot product function from a specific container. Here's the actual implementation of dot-product from VC++ (without user-supplied predicates). It's called inner_product in the C++ standard. This version doesn't handle unequal length vectors like the version above. Note the inconvenient parameter _V, necessary to drive the type system for the return type.
 template<class _II1, class _II2, class _Ty> inline
 _Ty inner_product(_II1 _F, _II1 _L, _II2 _X, _Ty _V)
 {
     for (; _F != _L; ++_F, ++_X)
         _V = _V + *_F * *_X;
     return (_V);
 }
CeePlusPlus 0x:
With variadic templates, we can write the ErlangLanguage version for compile time, albeit at a somewhat high syntactic cost.
 template <int... Args> struct Vector {};

 template <typename T, typename U> struct DotProduct;

 template <>
 struct DotProduct<Vector<>, Vector<>> {
     static int const value = 0;
 };

 template <int A, int... As, int B, int... Bs>
 struct DotProduct<Vector<A, As...>, Vector<B, Bs...>> {
     static int const value = (A*B) + DotProduct<Vector<As...>, Vector<Bs...>>::value;
 };

 #include <iostream>
 int main()
 {
     std::cout << DotProduct<Vector<1, 2>, Vector<3, -4>>::value << std::endl;
 }
CsharpLanguage:
The LINQ operators include Zip, Map (Select), and Reduce (Aggregate). You can combine these operators like this:
 static double DotProduct(IEnumerable<double> v1, IEnumerable<double> v2)
 {
     return v1.Zip(v2, (a, b) => a * b).Sum();
 }
ErlangLanguage (don't need no steenkin' HigherOrderFunctions):
 dot_product([A|As], [B|Bs]) -> (A*B) + dot_product(As, Bs);
 dot_product([], []) -> 0.
Would it be correct for DotProduct that, if the vectors are of different lengths, you only use a prefix of the longer one (as opposed to, e.g., triggering an error)? Or are the others like this just because it made the original Lisp version prettier? :-) -- LukeGorrie
No, it isn't correct. Inner products (of which the usual 'dot product' is one) are defined on inner-product spaces, which are vector spaces equipped with (unsurprisingly) an inner product. Such a space may or may not be a Hilbert space (i.e., complete in the norm the inner product induces). In any case, the operation is defined on vectors in a vector space. Vector spaces are defined over a scalar Field, so strictly speaking we should also verify that the scalars live in the same Field.
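For reference, the familiar Euclidean inner product of two vectors u, v in R^n is, in LaTeX notation,

 \langle u, v \rangle = \sum_{i=1}^{n} u_i v_i

and it is symmetric, bilinear, and positive-definite.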
Depending on how you want to work with the numeric tower in your Lisp, you could leave the job of constraining the scalars to the '*' operator, and just do:
 (defun dot-product (v1 v2)
   (assert (= (length v1) (length v2)))
   (loop for x across v1
         for y across v2
         summing (* x y)))
Or you could strongly constrain v1 and v2 to some space you are working with, e.g. (check-type v1 (simple-array double-float (3))). An optimized version for a declarations-as-assertions implementation, working in double-precision Euclidean space, might look like the following. It ventures into the not-so-elegant world of production-ish code that does these checks and works on a more concrete representation (simple arrays of length 3 holding doubles), which may (unfortunately) be needed for efficiency in many of the things you actually use Euclidean 3-vectors for. This example avoids the LOOP macro, to show a different approach.
 (defun dot-product (v1 v2)
   (declare (type (simple-array double-float (3)) v1 v2)
            (optimize speed))
   (assert (= (length v1) (length v2)))
   (do ((n 0 (1+ n))
        (res 0.0d0))
       ((>= n (length v1)) res)
     (declare (type double-float res)
              (type fixnum n))
     (incf res (* (aref v1 n) (aref v2 n)))))
So this version looks much like it would in any imperative language. The nice thing is that you can easily move from abstract-and-slow to concrete-and-fast as you develop in Lisp. Most importantly, the restriction to double-floats can be sensibly relaxed within the numeric tower. Also note that this sort of low-level performance tuning is often best handled by compiler macros etc., as someone mentioned above; no need to dirty most of the code like this! The length and domain checks should be there from the beginning, though.
I updated the Erlang version to signal an error if the lists aren't the same length. It's still defined for zero-length lists; is that right? What I was fishing for here is a way to show that pattern matching makes it easy to handle these sorts of constraints neatly. -- LukeGorrie (who would switch to ML (or Python!) rather than write such gross Lisp code as that ;-))
Actually, you caught me typing late at night :) From the above description, you should see that an inner product needs a scalar Field, so zero-length tuples are out. Length-1 tuples don't make much sense either (then you are just operating in the scalar Field itself). The problem with this sort of thing is that an idea like the dot product generalizes so easily, algorithmically, that it is tempting to write fully general code (much as the notation <a,b> generalizes quite well). Mathematically, the operation only makes sense when applied to elements 'from the same place'. On paper, this means you have to be careful you haven't written nonsense if you are working with multiple inner-product spaces. In the computer, something else should enforce this discipline. As long as you aren't after raw speed (think numerical simulations), you would stay with the elegant version. Python certainly isn't fast enough for that (without punting to C code, which you could do in other languages as well; I don't know about ML variants). There is nothing wrong with doing something like:
 (defun dot-product (v1 v2)
   (assert (and (= (length v1) (length v2))
                (/= 0 1 (length v1))))
   (reduce #'+ (mapcar #'* v1 v2)))
If you don't mind the consing. On the other hand, if you do work with different spaces regularly, it may be cleanest to use CLOS to designate your inner product spaces...
On Python speed for dot products: One might be surprised. I have a Python information retrieval application which does many dot products regularly, and the dot products are hardly a bottleneck. As always, speed depends upon the application, and in this case, I get to have my elegance and use it too. :)
Well, one might be surprised, but I wouldn't. In the applications I am thinking of above, I would expect something like a million dot products per time slice, so we are looking at billions to trillions of individual dot products. How many are you talking about? The Python version above isn't nearly as elegant as the Lisp version, and probably isn't faster (I don't know enough about Python's internals to be sure of this). Walking along lists in Lisp is very fast. For numerical work, the problem is that you just can't afford any temporary storage: each dot product (on a suitable domain) *must* boil down to a few assembly instructions (preferably unrolled, if you are dealing with short vectors).
I think we all agree here: it depends on the application. We just have different applications. It's no surprise that those of us whose applications don't require cycle-squeezing wince a bit at the heavily optimized code :-) and vice versa.
Absolutely. One nice thing about Lisp is the ability to get both, without ForeignFunctionInterface kludges etc.; I don't claim Lisp is unique in that regard. Although, to be fair, sometimes the only practical approach is to revert to a hand-tuned assembly library. That can't be anything but yucky.
TI320C6x AssemblyLanguage
I'm including this link because dot-product is the kind of algorithm that's often handcrafted in DSP assembly.
ftp://ftp.ti.com/pub/tms320bbs/c62xfiles/dotprod.asm (BrokenLink 20070703)
PrologLanguage (similar to Erlang):
 dot_product([], [], 0).
 dot_product([A|As], [B|Bs], D) :-
     dot_product(As, Bs, D1),
     D is (A*B) + D1.
DelphiLanguage:
 function DotProduct(a, b: array of double): double;
 var
   i: integer;
 begin
   Result := 0;
   for i := max(low(a), low(b)) to min(high(a), high(b)) do
     Result := Result + a[i] * b[i];
 end;
In SQL, if in the same table:
 select sum(a * b) from t
If in different tables:
 select sum(a * b) from t, u where t.index = u.index
ForthLanguage:
A rotten little routine:
 : @+ ( a -- a+ a@ ) dup cell+ swap @ ;
 : .* ( v1 v2 len -- p )
   0 >r begin ?dup while rot rot @+ rot @+ rot * r> + >r rot 1- repeat
   2drop r> ;
 : m.* ( v1 v2 len -- d ) \ double cell result
   0. 2>r begin ?dup while rot rot @+ rot @+ rot m* 2r> d+ 2>r rot 1- repeat
   2drop 2r> ;
May I ask why you're using a while loop to simulate a for loop? Is it to make it even more rotten?
or recursively...
 : dot-prod ( v1 v2 len -- p )
   1- ?dup 0= if @ swap @ *
   else rot dup @ >r cell+ rot dup @ >r cell+ rot recurse 2r> * + then ;
or to define dot-products for particular vector lengths...
 : *step ( v1 v2 -- p v1+ v2+ ) over @ over @ * rot cell+ rot cell+ ;
 : make-N.*: ( n "name" -- )
   1- >r : r@ 0 do postpone *step loop
   postpone @ postpone swap postpone @ postpone *
   r> 0 do postpone + loop postpone ; ;
 3 make-N.*: 3.* ( v1 v2 -- p ) \ where v1,v2 point to vectors of length 3
 see 3.*  \ gforth
 : 3.* *step *step @ swap @ * + + ;
 \ tests
 create v1 1 , 2 , 3 , 4 , 5 ,
 create v2 6 , 7 , 8 , 9 , 10 ,
 v1 v2 3 .* .       \ 44
 v1 v2 4 m.* . .    \ 0 80
 v1 v2 5 dot-prod . \ 130
 v1 v2 3.* .        \ 44
There is a better way to do this if your Forth has A & B registers:
 : *+ ( A:v1 B:v2 ) ( n -- n! ) @B+ @A+ * + ;
 \ ?EXIT is equivalent to IF EXIT THEN
 : .* ( v1 v2 len -- p ) -ROT >B >A 0 SWAP FOR *+ NEXT ;
 : make-N.*: ( n "name" -- )
   >R : POSTPONE >B POSTPONE >A 0 POSTPONE LITERAL
   R> FOR POSTPONE *+ NEXT POSTPONE ; ;
 3 make-N.*: 3.* ( v1 v2 -- p ) \ where v1 and v2 are vectors of length 3
 \ Compiles this:
 : 3.* ( v1 v2 -- p ) >B >A 0 *+ *+ *+ ;
If your Forth doesn't have A & B registers, you can simulate them as user variables, and it shouldn't be much slower. This is cleaner and faster. -- Michael Morris
AplLanguage:
 A +.x B
JayLanguage:
 A +/ .* B
KayLanguage:
 +/ A * B
RubyLanguage:
 def dot_product l1, l2
   l1.zip(l2).inject(0) { |sum, els| sum + els[0]*els[1] }
 end
or
 def dot_product l1, l2
   l1.zip(l2).map { |a, b| a*b }.inject { |sum, el| sum + el }
 end
or
 def dot_product l1, l2
   sum = 0
   for a, b in l1.zip(l2)
     sum += a*b
   end
   sum
 end
But I'm not an expert, so there may be better ways -- GabrieleRenzi
Without resorting to intermediate arrays:
 def dot_product l1, l2
   sum = 0
   l1.zip(l2) { |a, b| sum += a*b }
   sum
 end
Gotta love Ruby blocks! -- Martin Jansson
JavaScript:
 function dotproduct(a, b) {
     var n = 0, lim = Math.min(a.length, b.length);
     for (var i = 0; i < lim; i++)
         n += a[i] * b[i];
     return n;
 }
 assert( dotproduct([1,2,3,4,5], [6,7,8,9,10]) == 130 )
AppleScript:
 on min(a, b)
     if a < b then return a
     return b
 end min

 on dot_product(arr1, arr2)
     set sum to 0
     repeat with i from 1 to min(number of items in arr1, number of items in arr2)
         set sum to sum + (item i of arr1) * (item i of arr2)
     end repeat
     return sum
 end dot_product

 set a1 to {1, 2, 3, 4, 5}
 set a2 to {6, 7, 8, 9, 10}
 set dp to dot_product(a1, a2) -- should be 130
Standard ML (SML/NJ): lines starting with a dash contain the code; the other lines were printed by the compiler.
 Standard ML of New Jersey v110.57 [built: Fri Feb 10 21:37:49 2006]
 - val dotRealLists = ListPair.foldlEq (fn (x, y, s) => s + x * y) 0.0 ;
 [autoloading]
 [library $SMLNJ-BASIS/basis.cm is stable]
 [autoloading done]
 val dotRealLists = fn : real list * real list -> real
 - dotRealLists ([1.0, 0.0, 0.5], [0.5, 1.0, 1.0]) ;
 val it = 1.0 : real
or shorter:
 val dotRealLists = ListPair.foldlEq Real.*+ 0.0
OcamlLanguage:
For integers:
 let dotProductInts xs ys = List.fold_left (+) 0 (List.map2 ( * ) xs ys)
For floats:
 let dotProductFloats xs ys = List.fold_left (+.) 0.0 (List.map2 ( *. ) xs ys)
"List.map2" is the equivalent of Haskell's "zipWith". OCaml does not come with a standard "sum" function, so we had to do a fold explicitly. Extra spacing must be used when putting the multiplication operators inside parentheses to use them in non-infix style, because they contain asterisks, and "(*" and "*)" start and end comments in OCaml, respectively. In OCaml, the arithmetic operators for the different numeric types are different, and there are no generic operators for all numeric types, probably for reasons of speed, so the function for each type has to be written separately. It can be modified to suit other numeric types trivially.
It can be written shorter this way:
 let dotProductInts = List.fold_left2 (fun s x y -> s + x * y) 0
 let dotProductFloats = List.fold_left2 (fun s x y -> s +. x *. y) 0.0
PhpLanguage:
 $dot_product = array_sum(array_map('bcmul', $array1, $array2));
This is kind of a hack, in that we are using "bcmul", the BC arbitrary-precision multiplication function, because we want a multiplication function but can't just tell array_map to use "*".
 $dot_product = array_sum(array_map(create_function('$a, $b', 'return $a * $b;'), $array1, $array2));
This creates a multiplication function on the fly, but is ugly and verbose.
PHP 5.3 has decent lambda functions; not quite as verbose as create_function, and a lot more useful.
 $dot_product = array_sum(array_map(function($a, $b) { return $a*$b; }, $array1, $array2));
Using an idiom that's similar to FP's zip:
 $dot_product = array_sum(array_map('array_product', array_map(null, $array1, $array2)));
For a little more of a challenge, how about HessianInMultipleProgrammingLanguages? See http://mathworld.wolfram.com/Hessian.html (used a lot in MathematicalEconomics?). It builds on the Jacobian, which is itself related to the dot product (above) and the cross product (not much more complex) of differentials of a function. The tricky part would be the differentiation steps, but many contend the SymbolicProgramming distinction is illusory. Algorithms for the Hessian would show whether some languages allow this to be expressed more easily. That is, hessian(f), where f is a polynomial (to keep it manageable); the function should do all the differentiation steps itself (the input should not already be differentiated). The output is a matrix (a vector of vectors); assume no simplification is needed, but it could be PrettyPrinted.
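For reference, the Hessian of a scalar function f(x_1, ..., x_n) is the n-by-n matrix of second partial derivatives; in LaTeX notation:

 H(f)_{ij} = \frac{\partial^2 f}{\partial x_i \, \partial x_j}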
If you were putting the DotProduct method in an OO program, should it belong to the Vector class, or to an Operation or Utility class per the VisitorPattern? In other words, would you do:
 Vector v = (1,2,3)
 Vector u = (4,5,6)
 v.DotProduct(u)
or
 Vector v = (1,2,3)
 Vector u = (4,5,6)
 VectorOperation? o
 o.DotProduct(u,v)
What if you needed to include many operations, such as ScalarMultiplication?, CrossProduct, etc.?
Methods shouldn't belong to classes at all! Then you don't have this sort of confusion.
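In CommonLisp's CLOS, for example, a generic function stands apart from any one class and can specialize on both arguments at once. A minimal sketch, assuming the vectors are CL vectors:

 (defgeneric dot-product (u v)
   (:documentation "Inner product of two vectors."))

 ;; The method specializes on both arguments, so it "belongs"
 ;; to neither operand's class alone.
 (defmethod dot-product ((u vector) (v vector))
   (assert (= (length u) (length v)))
   (loop for x across u
         for y across v
         summing (* x y)))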
From the UCB Logo manual:
 to dotprod :a :b  ; vector dot product
   op apply "sum (map "product :a :b)
 end
ScalaLanguage:
 def dotProduct(as: Seq[Double], bs: Seq[Double]) = {
   require(as.size == bs.size)
   (0.0 /: (for ((a, b) <- as.elements zip bs.elements) yield a * b)) (_ + _)
 }
Scala 2.8 has zip and sum for all collection types, so:
 def dotProduct[T <% Double](as: Iterable[T], bs: Iterable[T]) = {
   require(as.size == bs.size)
   (for ((a, b) <- as zip bs) yield a * b) sum
 }
Also see ArraySumInManyProgrammingLanguages, CounterInManyProgrammingLanguages