This can be roughly divided into three categories:
1) Write unit-test code directly in your class. In Java, you can use the main method for this: have a method that executes every other method in the class and asserts the results. Do not unleash the code until it passes all the tests you can think of in your unit-test method, with a minimum of one test per method (a small sketch follows this list).
2) Perform system tests with a realistic version of the entire application in a staging area. Because the staging environment mirrors production, fewer bugs caused by aberrant environment issues, in addition to plain code bugs, will crop up when you move to production.
3) Conceive of a regression-test framework for the entire system that emphasizes push-button execution for everyone in the staging area. Adding new test cases should be easy. I've written frameworks that interpret plain ASCII text into running TestCase objects, and others that use XML for the test-case config. There should be a one-to-one relationship between run reports and test cases: 200 tests = 200 reports, plus one summary report (a sketch of this style of runner appears below). It's exhilarating to see reports that factually show you've made progress (and you know you're not just fooling yourself).
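To make item 1 concrete, here's a minimal sketch of the main-method style. The Calculator class, its methods, and the tiny assertEquals helper are made up for illustration; they just stand in for your real class and whatever assertion mechanism you prefer.

```java
// Hypothetical example class: tests live in main and run with a plain "java Calculator".
public class Calculator {

    public int add(int a, int b) {
        return a + b;
    }

    public int divide(int a, int b) {
        return a / b;
    }

    // One test per public method, executed straight from main.
    public static void main(String[] args) {
        Calculator c = new Calculator();

        assertEquals("add", 5, c.add(2, 3));
        assertEquals("divide", 2, c.divide(10, 5));

        System.out.println("All tests passed.");
    }

    // Tiny helper so the example needs no test framework at all.
    private static void assertEquals(String name, int expected, int actual) {
        if (expected != actual) {
            throw new AssertionError(name + ": expected " + expected + " but got " + actual);
        }
    }
}
```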
Number 3 is probably the most important. With a system like this (and very smart, hardworking co-workers) I've been able to release a system of 80,000 LOC more than once per day, not just once per month. That is key.
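As a rough illustration of item 3, here's a hypothetical, stripped-down runner. The line format ("add 2 3 5"), the TestCase class, and the simulated system under test are all assumptions for the sketch, not the actual framework described above; the point is only the shape: text in, one report per case, one summary out.

```java
// Hypothetical push-button regression runner: each line of a plain-text file
// is interpreted into a TestCase object, every case gets its own report line,
// and a single summary is printed at the end. The "system under test" here is
// simulated with trivial arithmetic.
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.List;

public class RegressionRunner {

    // One test case per input line: operation, two operands, expected result.
    static class TestCase {
        final String op;
        final int a, b, expected;

        TestCase(String line) {
            String[] f = line.trim().split("\\s+");
            op = f[0];
            a = Integer.parseInt(f[1]);
            b = Integer.parseInt(f[2]);
            expected = Integer.parseInt(f[3]);
        }

        boolean run() {
            int actual;
            if (op.equals("add"))      actual = a + b;
            else if (op.equals("mul")) actual = a * b;
            else throw new IllegalArgumentException("unknown op: " + op);
            return actual == expected;
        }
    }

    public static void main(String[] args) throws IOException {
        List<String> lines = Files.readAllLines(Paths.get(args[0]));
        int passed = 0;
        for (String line : lines) {
            TestCase tc = new TestCase(line);
            boolean ok = tc.run();
            if (ok) passed++;
            System.out.println((ok ? "PASS  " : "FAIL  ") + line);  // one report per test case
        }
        System.out.println("Summary: " + passed + "/" + lines.size() + " passed");  // one summary report
    }
}
```

Run it as `java RegressionRunner tests.txt`, where tests.txt holds one case per line; 200 lines give you 200 report lines and one summary, mirroring the 1:1 relationship described above.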
But what if the test code is buggy? Should I write test code for the test code? FOR GOD'S SAKE, WHEN DOES IT END!?
In practice, you can usually tell there's a bug in your tests when code you think is broken passes them, or code you think works fails them. One of the two is wrong, so you debug to figure out which. It's like the complementary strands of DNA: if a piece is missing from one strand, it can be reconstructed from the other. And in the end you won't find all the bugs anyway; you'll only increase your confidence that there are fewer of them than there would be in untested code.