Is Embedded Behind

Is EmbeddedSystems technology a generation behind with regard to allegedly higher or newer abstractions such as OOP, garbage collection, and databases? Is this necessary in order to have finer control over performance characteristics? Will higher abstractions necessarily (by definition) remove one from hardware-centric concerns, such that this will always be the case? Or perhaps compromises will be made such that embedded apps will be divided into "sensitive" and non-sensitive portions? For example, a missile may have an Oracle RDBMS in it for "helper" utilities, but the final decisions are made by something in more direct contact with the chips. Such a system may give an Oracle-based subsystem a fixed amount of time to see whether it can find a better answer to some navigational or decoy-detection puzzle. But if the Oracle system does not find the answer in time, then the root controller makes its own default determination.
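
A minimal sketch in C of the time-boxed "helper with fallback" idea described above. All the names here (now_ms, helper_poll, root_default) are invented stubs for illustration, not any real system's interfaces:

 #include <stdbool.h>
 #include <stdint.h>
 
 /* Hypothetical stubs standing in for a real monotonic clock and a real
  * (non-blocking) helper-subsystem query. */
 static uint32_t fake_clock_ms;
 static uint32_t now_ms(void) { return fake_clock_ms++; }
 static bool helper_poll(double *answer) { (void)answer; return false; }
 static double root_default(void) { return 270.0; }   /* default heading */
 
 /* Give the helper subsystem a fixed time budget; if it has no answer by
  * the deadline, the root controller makes its own determination. */
 static double decide(uint32_t budget_ms)
 {
     uint32_t deadline = now_ms() + budget_ms;
     double answer;
 
     while ((int32_t)(deadline - now_ms()) > 0)   /* wrap-safe tick compare */
         if (helper_poll(&answer))
             return answer;                       /* helper beat the clock */
 
     return root_default();                       /* time's up: use default */
 }
 
 int main(void)
 {
     return decide(50) == 270.0 ? 0 : 1;          /* helper never answers here */
 }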


Yes, they are (at least in automotive). We generally program in C, sometimes with assembler mixed in. We usually don't even use the standard C library. There are some definite reasons for it, though:

OOP: It's very easy to make something that looks simple but takes too long to run. Also, most OOP uses dynamic memory allocation, which leads to unpredictable out-of-memory conditions.

GC: The runtime is too unpredictable. Everything is generally statically allocated (except on the stack). Most deeply embedded software doesn't use malloc or recursion.
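
For illustration, here is a minimal C sketch of the usual alternative to malloc and GC hinted at in the two points above: a statically allocated pool, fixed at build time, so allocation takes bounded time and "out of memory" becomes an explicit, testable condition. Sizes and names are invented for this example, not taken from any particular controller:

 #include <stddef.h>
 #include <stdint.h>
 
 #define POOL_SIZE 8                  /* fixed at build time */
 
 typedef struct { uint8_t data[32]; } Msg;
 
 static Msg     pool[POOL_SIZE];      /* all storage reserved up front */
 static uint8_t in_use[POOL_SIZE];
 
 /* Bounded-time allocation: at worst POOL_SIZE probes, never a heap walk. */
 static Msg *msg_alloc(void)
 {
     for (size_t i = 0; i < POOL_SIZE; i++)
         if (!in_use[i]) { in_use[i] = 1; return &pool[i]; }
     return NULL;                     /* pool empty: caller handles it */
 }
 
 static void msg_free(Msg *m)
 {
     in_use[m - pool] = 0;            /* immediate, deterministic release */
 }
 
 int main(void)
 {
     Msg *m = msg_alloc();
     if (m == NULL) return 1;         /* "out of memory" is explicit here */
     msg_free(m);
     return 0;
 }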

Databases: We can't afford enough memory for this. The largest automotive controllers are about 512K to 1M of Flash ROM, and 10K to 32K of RAM. The more typical size is about 32K to 64K Flash ROM and 1K to 4K of RAM.

Update for 2009: some of the larger controllers are about 2M Flash / 96K RAM. Some controllers are very small, e.g. 2K Flash / 256 bytes RAM.


A missile with Oracle in it? Now that would give new meaning to the phrase "big iron".


Keep in mind that EmbeddedSystems is a wide universe, encompassing devices with orders-of-magnitude differences in CPU, memory, etc. (as well as interactivity requirements, reliability/robustness requirements, MissionCritical and RealTime requirements, etc.) Devices such as toasters, cell phones, high-end instrumentation, PDAs, pacemakers, nuclear weapons controllers, automotive powertrains, and factory automation systems all qualify (or contain subsystems which qualify) as "embedded systems". What is appropriate engineering practice for a toaster using a 4-bit micro to avoid burning the toast is decidedly not appropriate for a BlackBerry or for a missile guidance system.

Certainly, many advanced technologies found in PC applications and server-side components are unavailable to the EmbeddedSystems developer. (And technologies appropriate to the PalmPilot are not appropriate to an engine controller.) However, to say that EmbeddedSystems are "behind the times" is unnecessarily pejorative; embedded systems programmers often reject technology considered good practice in the desktop/server world for very good and legitimate reasons. It certainly is not because we're unaware of such technologies, or because we're cowboys who don't like high-level design techniques.

I am not sure what you are saying here. Are you saying that it is NOT hardware or timing constraints that keep embedders from using more OOP, GC, and databases? If so, then why do their decisions differ from domains where OOP/GC/databases are common?

In many cases, it is such constraints; however, my caution is not to treat embedded as a uniform category. Many specific constraints are well known; I will outline some of them here.

That said, some of the techniques you mention are found in embedded systems today. OO is quite common--C++ is commonly used in embedded systems, and many such systems also use Java and the like. GarbageCollection is found in systems where it is appropriate (and of course, the TektronixElevenKayScope is famous for being the first production embedded system--that I'm aware of--with a GarbageCollector in it). Full-blown RDBMSs aren't appropriate for numerous reasons (besides computing and storage resources), but many systems include NimbleDatabases; higher-end embedded systems are also fertile ground for a RollYourOwnDatabase. I've written one such creature in my career, I'm afraid to admit. :)

Unfortunately, there is a trend among academics and/or developers of PC/server software to assume that higher-level technologies are avoided in embedded systems due to laziness, ignorance, stubbornness, or a "cowboy mentality" among embedded systems developers. While these traits are certainly found (as they are in all programming subcultures), there are quite good reasons for how we do things. (And of course, we do have our share of cowboys who will proclaim that "C++, etc. is too slow" without bothering to consider the requirements of the system to be designed).

In most (if not all) cases, decisions on how to design embedded systems are best made by experienced embedded systems developers.

--ScottJohnson

A good example of how modern concepts are incompatible with real-time programming is RavenscarJava. This is a RealTime Java profile with a very simple solution to the problem: the system operates in two modes, a traditional setup phase and a RealTime phase. In the RealTime phase, the garbage collector is gone. Completely. Objects are neither created nor destroyed in real time. Everything's static. And this is Java - all the high-level functionality was there, and they had to throw it out because it wasn't deterministic.
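
To make the two-mode idea concrete, here is a minimal sketch, in C rather than Java (matching the rest of this page), of a setup phase that creates everything and a flag marking the point after which nothing is created or destroyed. Names and sizes are invented, not taken from RavenscarJava itself:

 #include <assert.h>
 #include <stdbool.h>
 
 #define MAX_TRACKS 16
 
 typedef struct { double x, y; } Track;
 
 static Track tracks[MAX_TRACKS];     /* storage fixed before real time */
 static bool  realtime_mode;          /* false during the setup phase */
 
 static Track *track_create(int i)
 {
     assert(!realtime_mode);          /* creation is a setup-only act */
     return &tracks[i];
 }
 
 int main(void)
 {
     Track *t = track_create(0);      /* setup phase: creation allowed */
     realtime_mode = true;            /* enter the RealTime phase */
     t->x = 1.0;                      /* real-time phase: use, never create */
     t->y = 2.0;
     return 0;
 }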

That being said, there ought to be an OOP model that is compatible with real-time needs for determinism. I have often thought that a system using only AutoPtrs and WeakReferences would be a good substitute for garbage collection without sacrificing usability too much; a rough sketch of the idea follows.
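
The point of the suggestion is that ownership-based reclamation frees an object at the instant its last owner lets go, so the cost is paid at a predictable point instead of at a collector's whim. A minimal C sketch of a close cousin, plain reference counting (cycle handling via WeakReferences omitted; all names invented for illustration):

 #include <stdio.h>
 
 typedef struct {
     int    refs;                     /* owner count */
     double value;
 } Sample;
 
 static Sample the_sample;            /* statically allocated, per above */
 
 static Sample *sample_retain(Sample *s) { s->refs++; return s; }
 
 static void sample_release(Sample *s)
 {
     if (--s->refs == 0)              /* last owner gone: reclaim NOW, */
         printf("reclaimed deterministically\n");  /* not at GC's leisure */
 }
 
 int main(void)
 {
     Sample *a = sample_retain(&the_sample);   /* first owner  */
     Sample *b = sample_retain(a);             /* second owner */
     sample_release(a);
     sample_release(b);               /* reclamation happens right here */
     return 0;
 }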

-- MartinZarate

I did not mean to belittle embedded programmers. Dealing with tight constraints is hard work that requires skill. I am only saying that such constraints limit the use of certain tools that are often considered "higher-level" tools. This changes the kinds of skills and tools needed, but does not necessarily reduce the total skill needed (if that were measurable). We don't need more DomainPissingMatches. For example, a professional world-wide trip planner does not have to worry about how to land a plane on a beat-up jungle airstrip, other than perhaps probability-based contingency issues (insurance, back-up contacts, etc.). It takes great skill to land a plane on bad runways, but it is a different focus than what the trip planner tends to worry about. --top

You betcha. The thing we're trying to say here is that embedded folk are concerned with making a system perform within its limitations. To that end we forgo such niceties as, say, an operating system. Forget about software component reusability -- in all likelihood the very electronics hardware on which the current product is built won't even be available two years from now. For durn sure the client will change his "mind" about how the UI should work, or what lights flash, or what bells ring.

Embedded software is a component of embedded solutions, which means we are typically involved in figuring out the electronics, the electromechanicals, and a whole pile of other considerations. The software is just one aspect of it. In that respect we are a lot higher level than even the trip planner mentioned above. Embedded folk have to plan the trip, plan all the midair refuelling, the emergency bailout procedures, the alternative landing sites, the engine selection, the GPS coverage...YaddaYaddaYadda. We do it all.


Deterministic Systems

One of the watchwords in EmbeddedSystems is deterministic. We're dealing with this concept right now in designing the network that will support the gadget farm that monitors a modern casino. In a nutshell, Ethernet+TCP/IP doesn't get the job done reliably enough to be used in this context. The swing between its best and worst cases is so wide that it is too non-deterministic for the application.

We have a proprietary protocol running on a low-end ARM platform under a not-really-an-OS of our own crafting that outperforms Ethernet+TCP/IP by an order of magnitude and has tight reliability characteristics.
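
The proprietary details obviously aren't public, but a sketch shows why even a simple time-triggered poll can beat Ethernet+TCP/IP on determinism: every device gets a fixed slot in every cycle, so worst-case latency equals the cycle time by construction. The timer and bus functions below are hypothetical stubs, not the protocol described above:

 #include <stdint.h>
 
 #define N_DEVICES 32
 #define SLOT_US   250                 /* fixed slot per device */
 
 /* Hypothetical stubs for a hardware timer and a bus transaction. */
 static uint64_t fake_now_us;
 static void wait_until_us(uint64_t t) { if (t > fake_now_us) fake_now_us = t; }
 static void poll_device(int id)       { (void)id; }
 
 static void bus_cycle(uint64_t cycle_start_us)
 {
     for (int id = 0; id < N_DEVICES; id++) {
         wait_until_us(cycle_start_us + (uint64_t)id * SLOT_US);
         poll_device(id);              /* bounded work in a bounded slot */
     }
     /* Worst-case latency = N_DEVICES * SLOT_US = 8 ms, every time --
      * no retries, no exponential backoff, no collision lottery. */
 }
 
 int main(void)
 {
     bus_cycle(0);                     /* one deterministic bus cycle */
     return 0;
 }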

Embedding isn't just about cheap components that can be mass-manufactured into small spaces for convenience and/or portability. It's also about predictability across a broad spectrum of conditions. Antilock brakes have to always work, not just on wet roads or ice or on weekdays. TerminateWithErrorMessage is just not going to work for a braking system. Nor for an ejection seat or parachute. Elevators may have acceptable failure modes, but fly-by-wire systems on a commercial jet liner don't.

On the desktop, periods of "laggy" performance are a nuisance; in a warehouse conveyor-and-pick system, such a thing can have results ranging from comical to catastrophic.

Complexity works directly against determinism in a system, so in systems where determinism is more important than peak performance, simpler (even "obsolete") components are often (usually?) the right way to go.

In an interview some years ago, a NASA engineer was asked why they were still using '486 chips in their orbiting applications when the Pentiums were so much faster. His answer was that the '486 was plenty fast enough for everything they were doing, had a well-established track record (more than a decade of proven reliability) and, after all, they weren't running Windows.

If you just let the CPU do its job, you'll find that a Z80/Z8, 80x51, '386/'486, or ARM can cope with quite a lot of event traffic reliably and within predictable timing requirements. Could you use a Pentium? Sure. Of course, you want one that has an established record of completely accurate computation. Accurate, reliable floating point might be nice. So, maybe not yet for the Pentium.
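
How does a modest CPU cope with steady event traffic predictably? A minimal sketch of the classic foreground/background split in C: a short, bounded interrupt handler feeds a fixed-size queue that the main loop drains at its leisure. The platform-specific interrupt hookup is omitted, and all names are invented for illustration:

 #include <stdint.h>
 
 #define QLEN 64                       /* fixed-size event queue */
 
 static volatile uint8_t q[QLEN];
 static volatile uint8_t head, tail;   /* single producer, single consumer */
 
 /* Foreground (interrupt) side: minimal, fixed work per event. */
 void uart_isr(uint8_t byte)
 {
     uint8_t next = (uint8_t)((head + 1) % QLEN);
     if (next != tail) {               /* queue full? drop -- stay bounded */
         q[head] = byte;
         head = next;
     }
 }
 
 /* Background (main loop) side: drain whatever has arrived. */
 static int drain_events(void)
 {
     int handled = 0;
     while (tail != head) {
         uint8_t byte = q[tail];
         tail = (uint8_t)((tail + 1) % QLEN);
         (void)byte;                   /* handle the event here */
         handled++;
     }
     return handled;
 }
 
 int main(void)
 {
     uart_isr('x');                    /* simulate an interrupt arriving */
     return drain_events() == 1 ? 0 : 1;
 }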

You need a gizmo that's enough for the job but not so capacious that one is seduced, out of reluctance to "waste" the extra horsepower, into piling on additional tasks to the point where determinism is lost. We've been down that road, too. Witness our 400 MHz gadget that clearly had enough power to spare that we could implement multi-media transport and presentation -- ForFree -- and still get the job done! In the end, the engineering needed to get the basic job done in this context was so steep that the primary use of the gadget changed to (surprise!) multi-media, and a different (66 MHz) gadget was built to do the original job. Reliably.

Determinism, grasshopper, not power, is the key. -- GarryHamilton

Concur. Some of the medical devices I have worked on used real time OSes that had guaranteed determinism, such as pSOS and ThreadX. (Of course, pSOS is now defunct, having been absorbed into the Wind River megalith. Sad.) We had to run all kinds of tests to prove that the interrupt latency was within limits, the task switching times within limits, yada yada. It's nice to have an OS do some of the grunt work for you, but it durn well better hold up its end when it comes to determinism, or you end up with more headaches than the OS is worth. That is why none of the PID machine controllers I ever did had an OS backing them up; they were all fixed-period task dispatchers with deterministic task execution times.
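
A minimal sketch of such a fixed-period task dispatcher in C, with invented task names and periods: a timer tick drives a table of tasks, each with bounded execution time, so the worst case per tick is just the sum of the scheduled tasks' worst-case times:

 #include <stdint.h>
 
 static void read_sensors(void)   { /* bounded-time work */ }
 static void run_pid(void)        { /* bounded-time work */ }
 static void update_outputs(void) { /* bounded-time work */ }
 
 typedef struct {
     void    (*task)(void);
     uint16_t period;                 /* run every `period` ticks */
 } Slot;
 
 static const Slot schedule[] = {
     { read_sensors,   1 },           /* every tick    */
     { run_pid,        1 },           /* every tick    */
     { update_outputs, 4 },           /* every 4 ticks */
 };
 
 static volatile uint16_t tick;       /* bumped by a periodic timer ISR */
 
 void timer_isr(void) { tick++; }
 
 static void dispatch_one_tick(uint16_t t)
 {
     for (unsigned i = 0; i < sizeof schedule / sizeof schedule[0]; i++)
         if (t % schedule[i].period == 0)
             schedule[i].task();      /* deterministic: no preemption */
 }
 
 int main(void)
 {
     for (int i = 0; i < 8; i++) {    /* simulate 8 timer ticks */
         timer_isr();
         dispatch_one_tick(tick);
     }
     return 0;
 }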


See: AreRdbmsSlow, EmbeddedSystem

