Of Mice and Monocultures

Ever since the early '90s we have had to witness the decline of a thriving subculture. Back in the '80s there was such a thing as a computer culture with many platforms to choose from (including emotional fights over which platform was best), be it C64 vs. ZX Spectrum vs. CPC or later on Amiga vs. Atari ST vs. Mac vs. Acorn Archimedes. Yes, there were PCs in those days, quite a lot of them, actually, but they didn't dominate the scene to anywhere near the extent they do today, nor was Microsoft (MS) as big. How did it happen? Not because PCs were the best hardware or because Windows was a good OS (or an OS in the first place); it happened because MS's advertising succeeded in attracting people to buy computers who didn't know the first thing about them, and that's what pushed the PC, and MS along with it, to critical mass: it didn't matter whether the system was good or bad, it was the only system Joe Public knew about, so it was the only system Joe Public bought. Chain reaction.

Even that wouldn't account for MS's dominance in today's world, in particular since over time computer users will come into contact with alternatives to Windows that may be much more attractive to them. MS's way of forcing people to stay with Windows is, to a large extent, their office formats. Often called "industry standards", these are anything but standards:

  1. a standard must have a publicly available specification -- MS's office formats are proprietary and
  2. a standard must be reasonably stable and backwards-compatible -- MS's office formats are rarely backwards-compatible; in fact you may rest assured that what you write in Word today will be just another bit of electronic garbage in five years' time. How this pathetic toy can be mistaken for a serious document-processing system is utterly beyond me.
These two points instantly disqualify the office formats as standards, while at the same time showing how MS hold people in their death grip: because their formats are proprietary, the best rival software houses can do is reverse-engineer documents they get access to, as there is no freely available library for things like Word or PowerPoint files. As soon as anyone but MS does a decent job of reading the latest office format, it gets changed again ("we had to change it to add this killer feature, dude") and the game begins anew. Rival companies will always be one step behind, so people who need to read the latest office formats (for instance because they are sent documents by business contacts) only have one choice: the latest MS Office, which will usually only run on the latest Windows. That's two upgrades people have to buy because they were stupid enough to trust their business data to these office formats in the first place, and by staying with these formats they also force their associates into the same neverending upgrade routine. All it takes is a certain number of morons insisting on using office formats for data exchange, and quickly large parts of the commercial software world are forced to buy the office suite from MS or lose business. Not because the office suite was good, mind you, but merely because nothing else can read its file formats.

This has always been MS's little game: corrupting open standards (just look what they did to HTML or PostScript) and establishing their own proprietary formats as pseudo-standards by sheer volume. Because open standards are MS's most lethal foe. See the pattern emerging? Unfortunately lawyers don't understand this system, which is funny because most of them are using the office formats themselves and should be well aware of the fact that they have to upgrade frequently.
Still, all they could come up with is splitting the company (which may have worked with AT&T but sure as hell will not work with MS) and forcing them to give Windows to schools for free (oh, how Bill must have laughed at the clueless little sods!). Want to know how to destroy MS's monopoly? Force their office formats into real standards, i.e. an open specification controlled by an independent consortium such as the IEEE, plus an independent (ideally free) reference library for reading and writing these formats (as exists for most image formats such as JPEG, PNG, TIFF etc.). In order to prevent the same situation from happening again in the future, the whole thing should be formulated as a generic law: any data format with a market share exceeding x percent automatically becomes an open standard under the above-mentioned conditions. Once MS can no longer keep the format a secret nor change it at will, rival companies will take care of the rest, giving users a free choice for the first time rather than having to compromise. There's only one more efficient way to deal with MS and that would be an H-bomb on their headquarters. H-bombs are rather expensive, though.

There is one upside to this "PC revolution", and that is that hardware prices have dropped dramatically (at the beginning of the '90s a reasonably up-to-date computer would typically cost around 2500 euros including a crappy 14" monitor; now it's less than half that). The main problem here is one that can be found practically everywhere in consumer electronics these days: the ordinary has become much simpler compared to the old days, whereas anything even slightly out of the ordinary is becoming harder all the time. So if you want to buy a bog-standard PC with the latest Windows installed and decent figures regarding CPU MHz, hard disk size and amount of main memory, you can get such a machine practically anywhere these days. But woe to you if you're going for anything slightly out of the ordinary. You want Linux instead of Windows? First try to find a vendor that will actually sell you a computer without Windows preinstalled, then do the Linux installation yourself. You want to find a nice, high-quality US-layout keyboard for programming over here in Germany? You're screwed; all you can find is low-quality crap (e.g. Cherry) in national layout. You may be able to find a non-national layout on rare occasions, but I for one was not able to find a US layout anymore, let alone a quality one, so I actually ordered a pair of IBM buckling-spring 'boards directly from the USA. Practically all keyboards you can get these days are of crap quality (at least compared to my IBMs); they only differ regarding pointless fluff like super-special keys for all sorts of bullshit, or IR transmission. The tragedy continues with practically all other components: off-the-shelf systems are optimized to look good regarding those three figures (CPU, disk and RAM, in case you had forgotten); quality is of little concern because everything has to be cheap. Want a silent system? Not in a standard PC which uses cheap fans for cooling, while silent components quickly raise the price by several hundred euros.
A CD-ROM drive that doesn't sound like a jet plane when spinning up? Too expensive (e.g. older Plextor drives; contemporary ones aren't anything to write home about either). A three-button mouse? "I think I've seen such a beast in a catalogue in olden times as a lad..." -- in other words, it'll take you a while; the only ones you're guaranteed to find anywhere have that retarded wheel. If you look around long enough you may actually find a dealer that sells quality systems, or you may just buy the components and assemble them yourself (if you can still find quality components anywhere, that is), but most of the complete systems you can buy are rather crappy as far as I'm concerned. Unfortunately, the same is also true for other contemporary electronics such as VCRs or HiFi systems; through years of brainwashing, the consumer now actually believes all it takes is adding the word "digital" somewhere in the product specifications to make practically anything a better product.

Back to PCs as such. A PC is basically a rather horrible collection of legacy problems leading to such classics as interrupt conflicts, but given a decent OS it can make a pretty nice machine, in particular regarding processing power per buck. Of course "decent OS" rules out Windows: that's basically a crappy graphical user interface that had an OS shoved under it over the years. It has always been inefficient and always will be, has more security holes than even the most tolerant user should stomach, and has a tendency to do things it shouldn't do, which has earned MS many an hour in court. It's also a myth that Windows is easy to use or a well-designed GUI paradigm -- if you want to see how a well-designed GUI works (note that I wrote works rather than looks; people usually put far too much emphasis on the graphical part, ignoring the far more important user interface) you should have a look at RISC OS. A well-designed GUI doesn't need things like file-selection boxes, for example. Windows has lots of buttons to play with, which may be nice for the utterly computer-illiterate but fails to cater for the needs of more experienced users (and far from everything can be achieved with keyboard shortcuts). This has led people to actually believe that GUIs are primarily of use to novices, rather than having a look at alternatives or clearing their minds and really thinking about the ways a good GUI should work. People no longer think "save data", they think "open file-selection box", for instance. They have adapted to the shortcomings of Windows, rather than adapting the computer to their personal preferences as it should be. Always remember: it's not eye candy or the number of widgets that makes a GUI good, it's consistency and efficiency. The ideal GUI is not one that hits you with bells and whistles like an avalanche but one you don't notice consciously in the first place, because the best thing you can say about a GUI is that it doesn't get in the way.

As far as operating systems for PCs go, there is fortunately some choice now, in particular Unix flavours such as Linux or BSD. Unix systems tend to have a rather steep learning curve, but once you've mastered the rocky start you'll soon appreciate the possibilities you have with a programmable shell, configurable window managers and keyboard handlers, as well as countless other things that are far more difficult (if possible at all) on Windows. As far as the desktop goes, things are about on par with Windows at the moment. Whether that's a good thing or a bad thing is another question. Personally I think this is a huge opportunity down the drain and therefore a very bad thing. The programmers behind projects such as KDE, Gnome or Qt had the chance to start from scratch and build a well-designed GUI without legacy considerations. What they actually did was look no further than Windows (of all the lousy GUIs out there they had to choose the worst) and reimplement that, by and large -- what a fucking waste of manpower! Fortunately even Windows clones like KDE or Gnome are still much more configurable than Windows, so you can usually get them to behave halfway sanely, but it's nevertheless a crying shame that rather than doing some real research into GUIs, or just thinking the concept through, they wrote Windows clones with very few original ideas (all the sadder considering that MS themselves aren't exactly known for having original ideas). The new implementations may have some big advantages regarding program design compared to older libs, but to a user Gnome or KDE look and behave more or less exactly like any other old GUI toolkit on Unix or Windows; and who needs another mediocre toolkit? People's reply to this is often that it was the best way to get people to migrate from Windows, but that seems dubious to me; at least I don't think it's the best way to achieve that goal.
The reason is that as long as the Linux community doesn't start building a better desktop than Windows (as it actually did build a much better OS than Windows) rather than just copying it, the only argument for Linux over Windows on the desktop is price -- and since most users don't even know where to get a PC without Windows pre-installed, the price argument will only appeal to big sites such as companies or civil administration, not the average home user. As far as Joe Public is concerned, Windows-on-Linux clones will not gain serious ground against Windows because Joe already paid for Windows when he bought his PC and doesn't want to install an OS himself anyway. The fact that the Linux kernel is better than the Windows one won't change that, because the office-format issue mentioned above (and of course games play a major role too) will weigh more heavily for most potential users. Fortunately the traditional Unix window managers are still around, such as CTWM, FVWM and many others. While these usually don't provide a consistent or graphically pleasing desktop, they can be configured rather exhaustively and thereby made to behave quite acceptably. Personally I've always used CTWM on my Unix machines; the resulting desktop doesn't stand a chance against RISC OS' WIMP, mostly due to the lack of consistency in X11 apps, but it's still a league above Windows as far as usability goes.

Don't rule out alternative hardware platforms either. I used an old RISC OS computer for most of my daily work for many years (its GUI is still light-years ahead of everything else, as it actually helps experienced users rather than crippling them). Unfortunately, its hardware has fallen way too far behind PCs over the years, so these days I'm using a PC running Linux for everything. So why not consider something other than a PC running Windows to do your work, even if it may be less convenient in some cases? Try Linux, even with KDE or Gnome if you have to -- you can always replace them later with a sane environment -- but do something about the MS monopoly. As long as you still buy stuff from MS, they couldn't possibly care less about your opinion. If you rant against MS but are too lazy to actually do something about it, you're just another superfluous hypocrite. Don't be a lemming, boycott the Redmond Mafia!

RISC in Peace

Once upon a time, there was a British computer company called Acorn, who in the late '80s developed the ARM RISC processor and the Archimedes computer using it (an 8MHz ARM2, to be precise). The CPU division was quickly turned into a separate (and highly profitable) company, ARM Ltd., whereas Acorn remained as an ARM client for the desktop market, with its own operating system RISC OS (or at least it passed for an OS back in those days). Thus, Acorn were the first to release a RISC computer for the home user, in 1987. Don't believe the claims on Apple's part about having done the same thing seven years later with their PowerPCs; it's a lie, and one that's all the more annoying in the light of Apple being a shareholder of ARM Ltd., so they knew perfectly well they were lying. I first came into contact with Acorn computers in 1993, when I bought an ARM3-based A5000, which for a long time did most of the things I wanted to do adequately, although it was certainly well past its best-by date towards the end. I eventually got a second-hand StrongARM RiscPC and the A5000 took the back seat. Why Acorn computers? Well, RISC OS is clearly a product of the home computer era which, despite a modularity and extensibility that were fine for their day, looks terribly dated by today's standards and is severely lacking in most respects that go without saying in a modern desktop OS (such as preemptive multitasking, process scheduling, virtual memory, drivers for contemporary hardware etc.). However, one of the strongest points of RISC OS-based systems is indubitably its GUI ("WIMP"), which hands down beats anything I've seen anywhere else (including X Windows on UNIX, MS Windoze and System 7). Apart from standardized and very efficient use of all (three) mouse buttons, it had things like an icon bar, consistent drag&drop and fast anti-aliased fonts when Micro$oft were still having trouble with overlapping windows.

But apart from the dated OS, there have also been severe hardware problems since the 1994 RiscPC release, which got successively worse over the years. ARM Ltd. were very successful in the embedded market, so their processors were developed for the requirements of that market, which differ from the requirements of the desktop market in some key respects, e.g. low power consumption being far more important than performance and backwards compatibility being only marginally more relevant than the motherboard colour (if ARM can save 5% power by breaking compatibility, power will win any day). Consequently, the original RiscPC 600's ARM6 didn't have an FPU and couldn't be fitted with one either (to this day, the only RISC OS hardware with an FPU is either ARM3-based (25MHz, around 1992) or some ARM7FE variant, none of which goes above 50MHz AFAIK). In 1996, we got the 200MHz StrongARM, which required changes to the OS and many apps and suffered from the RiscPC bus being far too slow to keep it fed. In 1998, Acorn went bust shortly before the planned release of the successor model codenamed "Phoebe"; the rights to RISC OS went to Pace, who licensed OS distribution and development for the desktop market to RISCOS Ltd, while companies like Castle continued selling the old Acorn kit and others like Microdigital developed new machines (the latter usually problematic; in particular MD's Omega is a legendary piece of almost-vapourware and highlights the gullibility of some RISC OS users to an extent that makes you hope no used-car salesman ever gets his hands on their addresses; search the web if you want to have a laugh and play a game of "where are they now?"). In late 2002, Castle released the Iyonix, an XScale-based RISC OS machine they developed which removed some of RISC OS' legacy hardware dependencies and thus came with off-the-shelf components where previously custom chips were required (e.g. for graphics and sound).
Since the XScale no longer supported the 26-bit modes used by RISC OS systems so far, OS and apps had to be adapted once again (you hopefully see the pattern emerging and what I meant with motherboard colours), resulting in two RISC OS branches, a 26-bit branch maintained by RISCOS Ltd and a 32-bit branch maintained by Castle. Shortly afterwards, Castle acquired all rights to RISC OS from Pace and, after some public mudslinging with RISCOS Ltd regarding desktop rights, came to an arrangement. Confused yet? Well, that's divorce settlements for you. Back to the Iyonix: the machine's specs looked OK-ish at the time, albeit certainly nowhere near the amount of bang per buck you could get with a regular PC (600MHz processor, 200MHz bus, old GeForce 2MX); I had just bought a PC for use as a Linux machine shortly before the Iyonix made its first appearance, however, which saved me from even being tempted to buy an Iyonix right away. I write "saved" because it quickly became apparent that apart from the teething problems to be expected from such a radically new design, the Iyonix (or rather its processor) still had some serious memory-performance issues, which basically means it could copy around 40MB/s in main memory and about 110MB/s in first-level cache -- in 2002! In absolute terms, this is faster than a StrongARM RiscPC, but not relative to CPU speed; effectively one of the biggest problems the platform had had for the past six years basically got worse. So with still no FPU on the horizon for a machine whose mainboard chipset can copy data from disk into main memory faster than the CPU can copy data within main memory, who wouldn't be tempted to buy one for a mere 2000 euros? Sorry folks, but no way am I buying a severely overpriced pocket calculator.

So now I use Linux on a normal PC for my daily work and only fire up the old RiscPC every once in a while to check whether my port of VICE still works; if it weren't for that, the old RISC OS hardware would long since have been moved to the attic. I'd really like to see the WIMP preserved, however, as it's clearly the superior GUI (small g, capital UI; the exact opposite of where everybody else is heading), but since the RISC OS market seems unable to admit that both the hardware as well as the actual OS are anachronisms which ought to be sacrificed so the one thing worth saving can be saved, it looks like it'll die along with hardware and OS. There is an emulator VirtualRPC for Windows and Mac now, which on contemporary PC hardware runs a little slower than an Iyonix for purely CPU-bound (integer) stuff, and is orders of magnitude faster when it comes to IO, memory and FP-code; normally that sort of thing should convince even the most hardened RISC OS-enthusiast to just cut the crap, abandon ARM hardware once and for all and either migrate to emulation on PCs or maybe even try what Apple did and port to a different hardware platform (Apple are actually doing it again at the time of writing, porting from PowerPC to x86). Not in the RISC OS camp, though, where many people would rather spend twice the money to get hardware with 5%-10% the integer performance of an off-the-shelf PC than dump the concept of "RISC OS hardware". Manpower that could be used to migrate to a more suitable hardware platform is wasted on trying to keep up the facade that the current ARM-based hardware was viable for a desktop system rather than the pathetic failure it proved to be in that field for anyone with a full set of marbles. Manpower that's wasted trying to maintain and moderately improve what is essentially a toy OS. 
This sort of thing is really completely pointless; even if the available resources allowed RISC OS to be improved sufficiently to stop being a bad joke compared to freely available operating systems like Linux or BSD, all that would achieve is wasting manpower reinventing the wheel. After all, we're not just talking about the OS core here but drivers as well, in other words an area where constant updates are needed. And for what? An OS is not a GUI philosophy; there is little ground for discussion about what an OS should do and how it should do it. The few issues where any kind of informed argument can take place are details like microkernel vs. monolithic kernel, real-time capabilities or multiprocessing scalability, in other words areas that are downright esoteric compared to the problems RISC OS has. What Apple have done with Mac OS X is exactly what should have been started in the RISC OS camp a long time ago: don't waste your energy reinventing the wheel (and in all likelihood come up with an inferior model anyway); take a tried, tested and well-maintained OS (BSD in Apple's case) and implement your GUI philosophy on top of that. Users choose systems like Apple or Acorn because of the GUI they see, not the OS beneath it they don't see, and the requirements for developing a competitive OS are no longer anywhere near the trivial level of the home computer era, so it's simply idiotic to develop and maintain your own OS unless it has unique selling points -- which RISC OS doesn't have on the desktop (things may look different in the low-power embedded market, but that's beside the point and I couldn't possibly care less); only the WIMP does -- that one in spades, though.

Still, all we've heard for all those years is that anything is more feasible than migrating once and just doing the GUI from then on, even developing and maintaining your own custom hardware and OS and adapting OS and applications every couple of years because the only ARM processor faster than what you currently have is incompatible and having to put up with an OS that lacks preemptive multitasking, process scheduling or memory protection worth a damn (and doesn't have a file cache either, for that matter) and hacking together rudimentary drivers at regular intervals because the components you do have drivers for are no longer available and not being able to do anything useful in real-life terms with 3D hardware because you lack an FPU and hardly ever being able to do a straight port of anything useful or fun because you lack an FPU and not being able to run some applications at all without tearing your hair out because you lack an FPU and on and on and on. Even now, with a functioning emulator available and PC hardware light-years ahead of the best the RISC OS market has to offer, we're told that a couple of guys who like soldering together custom hardware in their garage and another bunch maintaining a sorry excuse for an OS in the garage next door is the way forward. Sad, really. And this'll be the end of the platform, because with the kind of hardware we've seen these past 10 years, it has neither a future nor a right to one, and the same goes for the OS. A real pity regarding the WIMP, but then I put far less emphasis on the GUI these days and I'd certainly miss my Unix shells and decent hardware far more than the WIMP, so if the only way to get something like the WIMP is to pay a premium price for hopelessly underpowered hardware I'll pass, and good riddance. I hope VirtualRPC will be released for Linux eventually so I can run some of my old stuff without booting the old RiscPC, but unless a major miracle happens my days using RISC OS are over.
I don't know whether I'll stay with Linux as my main OS, maybe I'll have a closer look at BSD, Solaris or even the new x86 Macs. The only thing that's certain is that so far I could live just fine without Windows and I don't intend to change that.


The C++ Exception Scam

It's funny how sometimes the most retarded features are sold as the hottest thing since sliced bread. Take C++ exceptions, for example: often presented as a "killer feature" of the language, they are in fact a highly questionable construct few people would actually use if they properly understood the implications. If your reply to this is "what implications?", you shouldn't even consider using exceptions in your programs. But read on...

First let's have a look at what exceptions really are, because exceptions are not a gentle language extension but rather a construct that totally breaks traditional program flow control. It is rather bizarre that on the one hand people frown on the goto construct (rightly so, IMHO), but praise exceptions on the other, which are much worse than goto in that they can jump not merely to an arbitrary position in the current function but to basically anywhere in your stack trace. Exceptions can therefore be regarded as the ultimate goto, an invisible, magical wormhole out of any bit of code. This jump is a very expensive operation, since it involves unwinding all stack frames between the level of the throw and the level of the catch, plus there's always some runtime overhead when using exceptions in the first place. And exceptions can be raised by basically any C++ statement, if you allow them, in particular when you're using templates; even with simple code like if (cond) body1 else body2 you can no longer be sure that either body1 or body2 will be executed, because cond itself can raise an exception. There are well-known examples in the C++ literature (e.g. Exceptional C++) of a simple 5-liner where about 20 different hidden exceptions can be thrown in various places (i.e. you don't see a throw statement in this code; the exceptions originate from things like operator overloads and constructors). It's kind of funny that the people who write these kinds of books will readily point out these problems without realizing that what all these problems boil down to is that exceptions are simply a bad idea; or they simply don't want to admit it. Because in the real world, programmers just don't have the time to agonize over 5 lines of code until they've identified all possible sources of exceptions. Just look how long it took to make the STL exception-safe, and compared to most "real" software out there, the STL is peanuts!

When using exceptions, the goal has to be to make your code exception-safe, i.e. when an exception is raised the program must remain in a well-defined state, in particular not losing resources. How can this be achieved? You can either ignore the exceptions and let one of the functions which called yours handle them, or you can handle them yourself with a try ... catch block. The first approach means you're writing (or rather trying to write) exception-neutral code; the second is a much easier strategy to understand. The problem most people overlook is that exceptions don't magically relieve you of the tedious task of cleaning up after an error, and the real problem regarding error treatment in programs is not reporting the error to the user but recovering from the error as well and as efficiently as possible. In the case of exception-neutral code, your only way to clean up when an exception is raised is putting all your algorithms that have side effects or claim resources into classes and making sure that the side effects are rolled back and the resources freed in the destructor. When you're catch()ing the exception yourself, you can get away with far fewer classes and the cleanup code is more straightforward as well. So gratuitous use of try ... catch is the way to go, right?
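The destructor-cleanup strategy described above is what C++ programmers call RAII. A minimal sketch (FileGuard and write_report are made-up names for this example): the resource is tied to an object's lifetime, so the destructor releases it during stack unwinding no matter where the exception was thrown.

```cpp
#include <cstdio>
#include <stdexcept>

// Illustrative RAII wrapper: owning the FILE* means the destructor
// closes it even when an exception unwinds the stack.
class FileGuard {
    std::FILE* f_;
public:
    explicit FileGuard(const char* path) : f_(std::fopen(path, "w")) {
        if (!f_) throw std::runtime_error("open failed");
    }
    ~FileGuard() { if (f_) std::fclose(f_); }  // runs during unwinding too
    std::FILE* get() const { return f_; }
    FileGuard(const FileGuard&) = delete;      // exactly one owner
    FileGuard& operator=(const FileGuard&) = delete;
};

void write_report(const char* path) {
    FileGuard out(path);                 // resource bound to this scope
    std::fputs("report\n", out.get());
    // anything below may throw; ~FileGuard still closes the file
}
```

This is exactly the "put everything into classes" discipline the text describes: it works, but it means every resource and every reversible side effect in the program needs such a wrapper.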

Totally wrong, unfortunately, because in that case you're using exceptions just like you'd use traditional return codes. The use of exceptions gives you no advantages over the old if (error) cleanup-and-return routine, not even conceptual ones; using exceptions rather than return codes will introduce overhead, however, so basically you end up with a much less efficient way of doing things the same way you used to do them without exceptions. The overhead per try ... catch block will be moderate in this case, because the stack between throw and catch will be flat, but first, it will still be nowhere near as efficient as a simple return code, and second, you'll have several such blocks until you've reached a sufficiently high stack level where you can stop handling this exception. There is only one situation in which using exceptions is conceptually nice, and that is when the stack between throw and catch is very deep, because in this case it can be argued that the error can just pass through all those stack levels and be handled somewhere high up on the calling stack -- in other words, exception-neutral code. Of course the problem with this approach is that you still have to clean up, so you must correctly identify all places in your code where exceptions can originate and make sure that if an exception is raised your program is not left in an undefined state and does not lose resources, and you have to do that without using a single catch statement anywhere but at the level of your function call stack where you can fully handle the exception, because otherwise you wouldn't have exception-neutral code anymore. As already mentioned above, writing exception-neutral code without overlooking possible sources of exceptions is rather complicated and error-prone, because even small mistakes will result in badly behaved programs.
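A minimal comparison of the two styles (all function names here are invented for the sketch): the same validation written once with a return code and once with an exception caught immediately at the call site. The second version buys nothing conceptually; the local catch is just a more expensive spelling of the if (error) branch.

```cpp
#include <stdexcept>
#include <string>

// Return-code style: failure is a cheap branch at the call site.
bool parse_rc(const std::string& s, int& out) {
    if (s.empty()) return false;
    out = static_cast<int>(s.size());
    return true;
}

// Exception style used the same way: identical logic, plus the
// unwinding machinery and a mandatory try ... catch at the call site.
int parse_ex(const std::string& s) {
    if (s.empty()) throw std::invalid_argument("empty input");
    return static_cast<int>(s.size());
}

int caller(const std::string& s) {
    int v = 0;
    if (!parse_rc(s, v)) return -1;      // flat and cheap
    try {
        v += parse_ex(s);                // same check, dressed up
    } catch (const std::invalid_argument&) {
        return -1;                       // a glorified return code
    }
    return v;
}
```

Both paths reach the same result; only the cost differs.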

Another problem with exceptions is that there's no way to automatically get the exact position where the exception was throw()n, at least not in the generic language standard (and we wouldn't want to start using proprietary extensions, would we?). This sort of thing may be acceptable if the exception originated in an external library we don't understand and can't modify, but it's certainly not when developing your own code. Sure, some exceptions are pretty easy to track down, and in other cases you can build exception classes that contain the necessary information and manually fill it in before actually throw()ing the exception, but it requires a lot of discipline to do that consistently in your entire program, and you can have the very same thing by writing the error position into a log file in traditional error handling, so exceptions have no conceptual advantages whatsoever in this situation either. As many developers will confirm, most of the time a simple core dump and stack backtrace are orders of magnitude more helpful in debugging a program than an exception which first gives you the tedious task of finding out where it was throw()n in the first place before you can actually address the reason it was raised.
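The manual workaround described above is typically done with a small macro capturing __FILE__ and __LINE__ at the throw site. A sketch (LocatedError and THROW_LOCATED are made-up names; since C++20, std::source_location offers a cleaner route, but that only underlines how long the language went without one):

```cpp
#include <stdexcept>
#include <string>

// Illustrative exception class that carries its throw position,
// filled in by the macro below at every throw site.
class LocatedError : public std::runtime_error {
public:
    LocatedError(const std::string& msg, const char* file, int line)
        : std::runtime_error(std::string(file) + ":"
                             + std::to_string(line) + ": " + msg) {}
};

// The preprocessor is the only portable pre-C++20 way to capture
// the throw site automatically.
#define THROW_LOCATED(msg) throw LocatedError((msg), __FILE__, __LINE__)

int checked_div(int a, int b) {
    if (b == 0) THROW_LOCATED("division by zero");
    return a / b;
}
```

Note that this only works where you control the throw sites and remember to use the macro everywhere -- exactly the discipline problem the text describes.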

In conclusion: there are basically two ways to achieve exception-safe code: gratuitous use of try ... catch blocks on the one hand, and in-depth exception analysis of the code plus correct cleanup in destructors on the other. The first approach gives you runtime overhead only and no conceptual advantages at all; code that falls into this category is a prime example of using language features because they're there rather than because they make sense, and the people who wrote this code should ask themselves whether they really understood exceptions at all rather than just using them. The second approach is very nice conceptually, no question about that, but basically impossible to realize in actual code because it'd shoot development times through the roof. Are you up to the task? I've had years of experience with C++ and frankly I don't think I am; I don't even see a reason to be, as I have more important things to do than look for every instruction in my code that might (transitively) throw an exception (if I allowed them). My personal approach to exceptions is therefore simply not to use them. There are many nice features in C++ I very much like to use, such as templates, polymorphism and operator overloads, but exceptions are a very bad idea as far as I'm concerned. People who use them are either geniuses (able to write perfect exception-neutral code) or rather stupid (either having no idea about the side effects of exceptions or cluttering their code with try ... catch blocks) -- you decide where you stand.
