Run your program at two problem sizes n1 and n2; measure its run times t1 and t2; calculate a = log(t2/t1) / log(n2/n1). Then 'a' is an empirical local order of growth, t ~ n^a (empirical, i.e. we've actually measured it; local, i.e. valid for the given range of problem sizes n1...n2).
The basic assumption is that for the same size range we'd always get similar values, representative of the true algorithmic nature of the code, regardless of the testing hardware, implementation details, etc.
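A minimal sketch of the measurement in Python, assuming a hypothetical workload `work(n)` (here a quadratic double loop, so we'd expect a ≈ 2); the names and the example workload are illustrative, not from the original text:

```python
import math
import time

def work(n):
    # placeholder O(n^2) workload: sum over all index pairs
    total = 0
    for i in range(n):
        for j in range(n):
            total += i * j
    return total

def empirical_order(f, n1, n2):
    """Return a = log(t2/t1) / log(n2/n1) for problem sizes n1 < n2."""
    t0 = time.perf_counter(); f(n1); t1 = time.perf_counter() - t0
    t0 = time.perf_counter(); f(n2); t2 = time.perf_counter() - t0
    return math.log(t2 / t1) / math.log(n2 / n1)

print(empirical_order(work, 500, 1000))  # typically prints a value near 2.0
```

In practice you'd want n1 and n2 large enough that the run times dominate timing noise, and you might take the best of several repeated runs for each size before computing 'a'.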