B.E. (Comp. Sys. Eng.)
also known as zed
& handle of notzed
Swings and roundabouts.
Ok, back to a bit of hacking. There's something about spending all day chained to a windows xp desktop and a visual studio session that simply sucks the life out of you, leaving little time or energy to pursue other hobbies or interests. But with the lengthening days and a lack of TV to watch, I've found time to look back at Java again.
The last time I did any Java was 1998-1999, so it's been a long time between drinks, so to speak. Although I tried, I never really did get to like eclipse. And the last time I went looking for plugins, all the interesting-looking ones (for what I was looking at) were out of date, or had nothing but dead links in them - so that put me off even bothering. So this time I tried netbeans - and by chance 6.5 had just been released.
Well, I was mostly impressed. Swing actually isn't that bad. And it's a breath of fresh air to have some decent documentation for a change after dredging through the crap for WPF. Or even to look at the source code! Wow! Almost like the 'good old' GNOME days where I built everything from glib up locally, so had a much better time of debugging. At first the GUI editor seemed as crappy as the WPF one - defaulting to hard-coded layouts. Then I couldn't find any table or v/hbox/stackpanels and thought the whole system was really stuffed. But then I discovered that the layout container can automatically snap to theme-specified offsets, can align columns and rows, and actually aligns baselines rather than bounding boxes. Wow. I don't know if it'd be easy to use from code, but with the netbeans editor it is quite nice. Far better than gtk+'s (and WPF's) fixed-size crap and having to worry about whether it's an inner or outer margin - it just does that automagically, and even scales for different themes. Nice. Anyway, after struggling with vbox/hbox/table (ugh), and then grid/stackpanel (sigh, even worse) for years, I instantly fell in love (I think it's a GroupLayout, or maybe it's a netbeans-specific AbsoluteLayout).
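For what it's worth, the automatic-gap and baseline behaviour described above does map to plain GroupLayout code as well, not just the netbeans editor. A minimal hand-written sketch (the label/field form here is made up for illustration):

```java
import javax.swing.GroupLayout;
import javax.swing.JLabel;
import javax.swing.JPanel;
import javax.swing.JTextField;

public class GroupLayoutSketch {
    public static JPanel buildForm() {
        JPanel panel = new JPanel();
        GroupLayout layout = new GroupLayout(panel);
        panel.setLayout(layout);

        // These two calls are what insert the theme-specified offsets
        // automatically, instead of hard-coded pixel margins.
        layout.setAutoCreateGaps(true);
        layout.setAutoCreateContainerGaps(true);

        JLabel nameLabel = new JLabel("Name:");
        JTextField nameField = new JTextField(20);

        // Horizontal: a label column followed by a field column.
        layout.setHorizontalGroup(layout.createSequentialGroup()
                .addComponent(nameLabel)
                .addComponent(nameField));
        // Vertical: align the two on their text baselines,
        // not their bounding boxes.
        layout.setVerticalGroup(
                layout.createParallelGroup(GroupLayout.Alignment.BASELINE)
                        .addComponent(nameLabel)
                        .addComponent(nameField));
        return panel;
    }
}
```

Not exactly what the netbeans editor emits, but the same idea: no fixed sizes or margins anywhere in sight.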
Oh, and it all runs quite snappy too. Not that it isn't a bit of a pig (but machines are so much bigger now - so it isn't really an issue), but UI apps seem just fine - in fact netbeans is a lot faster than visual studio, in every way. A lot faster. I do wish you could set bitmap fonts though - I'd love to just use 'fixed' as my font.
I thought I'd miss 'delegates' - as a C coder I littered my code with function pointers all the time; I really couldn't get enough of them, and c-hash delegates are much the same (although I don't use them the same way). But using anonymous classes actually looks neater for some reason - it's basically what c-hash is doing 'under the bonnet', and often the delegate callbacks do so little they don't warrant their own function anyway. Properties are sort of nice - but they are only syntactic sugar; they don't actually do anything differently to a getter and setter, and they provide no additional facilities automatically, like property monitoring. c# events are quite nice though; again they are really only syntactic sugar, but they do save a lot of boilerplate code.
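As a sketch of the anonymous-class-instead-of-delegate point: where c# would wire up a delegate, Java declares the listener interface implementation inline. The listener and its tiny log here are made up for illustration:

```java
import java.awt.event.ActionEvent;
import java.awt.event.ActionListener;

public class CallbackSketch {
    static String fire() {
        final StringBuilder log = new StringBuilder();

        // The anonymous class - roughly what a c# delegate compiles
        // down to anyway. It closes over 'log' from the enclosing scope.
        ActionListener listener = new ActionListener() {
            public void actionPerformed(ActionEvent e) {
                log.append("clicked: ").append(e.getActionCommand());
            }
        };

        // Invoke the callback directly; normally a JButton would do this.
        listener.actionPerformed(
                new ActionEvent("source", ActionEvent.ACTION_PERFORMED, "ok"));
        return log.toString();
    }

    public static void main(String[] args) {
        System.out.println(fire()); // prints "clicked: ok"
    }
}
```

For callbacks this short, keeping the code at the point of registration arguably reads better than a named method elsewhere.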
Apart from that, it struck me just how much like Java that c# really is. In true ms fashion all they did was copy and complicate. The simplicity of the Java language is nice.
Some of the libraries are nice too. Simple. But others - you just wonder what people are thinking. The JSR for binding properties is awful. It looks complicated and messy to use - much of it in the name of 'type safety' - but then they go and use some scripting language for binding expressions that isn't checked till run-time anyway. Sigh. Everyone seems obsessed with standardising and formalising every little thing too. Sometimes the smaller, simpler stuff can just be left to the implementor. Some of the 'design pattern' guys seem to have gone off the deep end - adding over-doses of complexity upon complexity.
The application at work is a single-user desktop application that uses a SQL RDBMS as a backend. One thing we've contemplated is moving it to a multi-user, server-based system. Now I would simply not consider using .NET to do this. So that was another reason to re-visit Java. Ahh, j2ee 5. Well, one thing that can be said for Java: you've got a lot to choose from, a lot of it is free, and some of it is extremely good quality. It's really like another world completely from the closed, money-grabbing, greedy eco-system in the stagnant cesspit surrounding the outflow from microsoft. There just isn't any comparison - they're not even in the same league, perhaps not even the same sport.
I'm still trying to get my head around the persistence framework. I kinda liked the older Entity model, because things were coded directly; at least initially it seems easier to understand (or at least, I figured out how it worked at one point, although I never used it). The newer model does a few things unfamiliar to me, but I'm sure I could get used to it. Our .NET code uses a custom entity-like system and a lot of custom SQL (lots of messy joins), which would be quite difficult to move to the persistence query language, but most of it could be moved to views or stored procedures as well (and probably should be, at that). I'd considered nhibernate, but it was a bit immature at the time, and quite frankly I didn't see the worth in investing all that time learning another meta-language to re-describe what was already in the database (and I still don't - I'm glad as hell that j2ee 5 uses xml meta-descriptors sparingly).
The new EJB stuff is quite nice in some areas. Last time I worked on Java this stuff didn't even exist. We were using CORBA directly. The automatic CORBA servant generation from interfaces is ... well it's nice. But there are some `strange' limitations. Well they're not really strange - it forces a particular architecture which will scale transparently. But if you don't really need that it does limit things needlessly - like passing around client-hosted interfaces. Although facilities like JMS can often be used to implement the same sort of things anyway. JMS is nice.
One problem is that although all of these really nice facilities exist - it can be a real pain getting them to actually work. I was playing with JMS and even though I was using the 'correct' mechanism for sending messages, I was running out of connections. A bug in the app server perhaps? I'm not sure. Not knowing is a bit of a problem. And with a distributed application I was hoping to re-use the persistence objects remotely, but that doesn't really work. Ho hum, back to the (manual) grind of load--copy to mirror object--return, etc. In another case I tried changing a persistence object from being LAZY to EAGER loaded - it crashed with what seemed to me an interface mismatch between server and client. Couldn't work that one out. Actually in general netbeans+glassfish seems terribly unreliable at consistently rebuilding dependencies. Maybe I'm doing something wrong, but even with less than a dozen files I often have to run it twice, or shut down everything and clean+build to get new code loaded (this is something that affects visual studio too).
I shall continue to tinker.
Sackboy, alias The DRM Kid
Ahh well, so my `worst fears' about the LittleBigPlanet (LBP) moderation system seem to have come to fruition. I knew when Sony decided to recall all copies because of a possible religious offence only days before a world-wide launch, at no doubt very high cost, we were in for a messy ride if not a complete cluster-fuck.
It seems LBP's moderation system is as harsh as it is limited. From what I can tell, levels get deleted from the server with no explanation for even the slightest infringement - you cannot even play your local copy if you are online, and they retain persistent identifiers so you cannot copy/edit and resubmit them either. There is no ratings system, so everything has to be child-friendly (why does the PS3 have a parental control setting then?), but also it means it has to be inoffensive to everyone - everywhere. Unfortunately, if you look hard enough you can always find someone who will find offence in whatever it is you are doing, no matter how commonplace or inoffensive it may be to you and your associates.
Then there are those nefarious so-called intellectual `property' laws. Trademarked, copyrighted, or even patented(!) items cannot be represented in the game. So I guess no one-click shopping levels! And these are only compounded by the difficulty in understanding what they actually mean and how they are applied - which might not even be possible if they are trying to comply with some sort of lowest-common-denominator of laws from all countries that have the PSN. For a family-oriented game, almost certainly to be used without the supervision of legal counsel, the likelihood of running afoul of these laws is quite high. And when their home-spun creation vanishes without explanation, I'm sure they'll let all of their friends know of their frustration and anger. Hey, maybe they can get Fry in to do some more tutorials - teaching the general populace about the evils of copyright violation!? `Here's the DRM Kid, he'll protect us from those nasty pirates!' (rather ironic if it happened, given his recent piece extolling Free Software for The GNU Project's 25th anniversary).
Ahh, what a mess these laws have created for all of us - even those who wanted them.
But what about the game itself?
I actually bought LBP on Friday and spent a lot of the weekend playing it. I had considered the censorship issue and thought it would probably work itself out. Reading the current threads on the forums does make me think less of the game, although of course it doesn't change the gameplay of the built-in levels.
It looks gorgeous, with the materials, textures and lighting, sound and effects all spot on (apart from the jump 'whoosh' sound, which I don't particularly care for). The level designs are varied and mostly interesting, and there are constant puzzle elements when trying to advance, or get to bonus items on a level. The controls are overall pretty good, although since everything is physics based it can take a little getting used to compared to programmed behaviour. The 3 levels of depth are a bit fiddly at times, but it isn't a deal-breaker. It has some frustrations - checkpoints are limited, so death can mean repeating an entire level. The `emote' functionality (happy/angry/sad/scared) is mostly pointless - usually you're too far away to even see it properly, and it sort of wobbles and flickers a bit when you can - which makes it look a bit silly. It also definitely isn't a young kids' game - it is too difficult for that, requiring exact timing and jumping in places - although some of the mini-games are for everyone. Multiplayer can make level traversal more difficult, but that is part of the fun too, e.g. interfering with someone so you get more of the goodies. And each level has at least one 2-player puzzle for extra items. But on many scenes the camera doesn't pan enough, so you can get left behind very quickly - after a short timeout you die until whatever player the camera is on gets to the next checkpoint. I haven't played online.
The user-created levels are a mixed bag - there are so many that it is difficult to find the really good ones. There are too many (utterly pointless and annoying) `get trophies quick' levels high in the list, and people are already asking on forums for levels to be `hearted' for the purpose of gaming the system. So it's mostly just a random guessing game, although with time the tagging system will probably become more accurate and thus useful. Levels load pretty quickly - although I've had some failed loadings, a retry normally gets them working. The polish and difficulty varies greatly, but there is a fairly wild mix, from simple platform games, to races, to puzzle games and so forth - even a side-scrolling shooter with a sort of string-puppet feel to it(!).
The level editor is necessarily quite complicated. But building things out of real materials with motors, switches, connectors and so forth is fairly intuitive - and a heap of fun. It will take a lot of time to create a good level - and with the broken moderation system in place, it reduces the enthusiasm somewhat - it could vanish without explanation if you make a mistake and offend some culture you're not even aware of, for example.
I don't know if the moderation thing will get sorted out. Hopefully it can be toned down a bit - at the least it must tell people what they did wrong and give them the opportunity to fix it. There are plenty of people who want to make levels in good faith, but if just one level gets deleted permanently with no explanation or recourse, I can imagine they'll give up on the game forever - and probably be angry enough to let everyone else know too. A tiered system with some more grown-up `may-offend but not-illegal' content would be bloody nice too. As it stands there's no possibility of art or satire, just `commercial-mainstream that is so inoffensive it manages to offend' type levels - which will end up being boring. A multi-tier system is probably too much to ask, but they clearly need a better system for accidental problems through ignorance - otherwise this game will lose sales.
I suspect it may be a scalability issue - rather than check or edit the levels to fix problems they just blocked the level (or a grief-report blocks the level immediately, until it is reviewed, and they have a back-log). I'm still willing to give them the benefit of the doubt, but I guess time will tell on this one, as it always does.
IView and PlayStation 3
The new 2.5 firmware for the Playstation 3 included some level of Flash 9 support - which opens up quite a few of the sort of sites one might want to access from a TV-connected device. One of these is the ABC's iview product - which is like a DVR of the last couple of weeks of some of the ABC's TV. I have no idea if it works outside of Australia.
There is a little trick to getting it to work on the PS3, but once that is done, it seems to work ok. The first time you go to www.abc.net.au/iview, you need to check the 'do not show this again' thing, and then restart the machine (not just the browser). Then the next time you open it, it should load up ok - although it can take a little while to get going. Viewing other flash sites in a given session also seems to upset it sometimes, but again a restart should get it working again.
It is a bit slow though. At full 1080p resolution it barely works - in fact I got no video at all. At 720p it works but is a bit jerky; 576p is a bit better, but it's a pain since you need to change the resolution of the whole machine, not just the web browser, and 576p looks pretty bad on an HD TV. Another tip - if you change the 'Resolution' setting in the 'Tools' menu (the actual layer on which the browser content is rendered internally) to -2, it speeds things up further, but at a cost to the video resolution. Still, either way it is fine for watching non-action content such as news and talking-head shows. I think Sony need to throw some more effort at the video codecs and flash player, since the CELL should be more than capable of running at least the video at full speed. It would be nice if they implemented 'full-screen mode' too - currently you have to zoom and maybe fiddle with the view a little to get it to show nicely. Perhaps a '-3' setting for the Resolution setting when in 1080p mode could help too, and just scale things up - rather than forcing one to reset the display on the whole system.
The menus themselves are unfortunately all done in flash - which means that although they 'look nice', you need to use a mouse pointer, and can't 'cursor-key' between links as you can on HTML pages. This is mostly just an inconvenience when using a controller, but in some cases it breaks the interface. The widget set they use is a bit shitty; for example, in the full programme listing you can only scroll by grabbing the handle in the scroll-bar widget -- which is far too coarse, and you cannot get to every item in the list. It would be nice if they had a simpler, alternative interface that didn't do all that pseudo-3d shit and background animations as well - all it does is make it a pain to use. The video quality itself is pretty low - well, on a big tv it really shows anyway. It's probably comparable to 'high quality' mode on youtube (when the source material is broadcast quality) - it's ok and quite good for talking heads, but action and pan shots are joltingly difficult to watch. Certainly things could be improved - but it is a nice little addition to the free services available. At least until we get PlayTV over here. Perhaps Sony can try to work with the ABC a little, as they seem to be doing with the BBC's IPlayer, to help improve performance and the accessibility from a controller interface.
Another issue -- iview traffic is all metered for almost all ISPs -- unlike the older ABC video content, which most decent ISPs graciously provided as un-metered traffic. This is because of the unfortunate use of Akamai for their content delivery -- I find it somewhat surprising and quite disappointing that the ABC would turn to an international service provider for an Australian service, rather than a local company. There should be plenty which are more than capable of supplying the technology required (Akamai was probably seen as a quick and easy solution - but time will tell if it was a good one). They say they are working on a way around this, but while they use Akamai it sounds like any solution will be flaky (as in iiNet's case it seems, according to whirlpool), and/or costly for ISPs to implement - so they may not do it. I guess we'll all find out soon enough. Although on the other hand, if you have a decent ISP, and aren't using your connection to download movies and tv shows, you probably have the quota to spare. And if you are, you probably aren't interested in iview anyway.
State of Indignance
I'm still around. I haven't been doing anything particularly interesting of late and the copious news and blog reading I've been doing has made me too angry and aghast to feel like posting much.
Recently one of my uncles died - at 98. A very impressive feat; he had a large family and strong ties to his community and was an all-round nice bloke. Not that I'd seen him for a long time, but the family used to visit when I was a kid. I hadn't quite realised how tied into the church his family was. It was interesting to see the role that the Church played in forming and bonding his community together. It got me thinking that maybe this church thing isn't such a bad idea after all. But then during the sermon it veered off into dreaming about him enjoying his after-life, and it just felt sad, all of these good people believing in a silly fantasy. Although they did celebrate his wonderful life as well, the emphasis on the after-life seemed both unnecessary and childish.
But apart from that, it got me thinking about how church and community goes together. He was part of a small country town in a productive part of the country (even drought years aren't so bad there generally). In such a setting where most of the community knows each other, I can see a church as being a quality way to help people socialise, and provide some common ground to bind people together. But does this work in a larger town or city? I suspect it does not scale very well. And like many other things, the scale of humanity has out-grown these ideas, and it is probably time we moved on. Which society is doing anyway, as reflected by census results.
The US election. Wow. That's been quite an interesting one to follow. Normally I am not too fussed about American politics, at least to the level of following an election campaign (even when I was in the USA in 2000, or was it 2004), but this year something has been different. Certainly there's no shortage of `character' in the players this time around, which makes for some interesting opinion pieces out there. The hate- and racist-fuelled republicans really seem to be coming out of the woodwork, encouraged in large part by the divisive wedge politics the right loves to play with. And like many, I find the prospect of an educated and thoughtful new face running the USA for once (during my life-time) intriguing. And really, for the sake of humanity, I think all the rest of the world is hoping for an Obama victory; McCain is one angry, grumpy old man who is likely to do anything in a fit of rage, or just die of old age, and Palin just wants to establish a fascist theocratic state and probably help accelerate the apocalypse (and all that before dinner with the family). Of course, with the (world) economy in such a mess, victory may be a bit of a poison pill, but there isn't much choice. Another point of interest is how voter-disenfranchising activity is even remotely legal or tolerated or even considered in the first place. What sort of a fucked up `democracy' is that?
Ahhh, the world economy. Hasn't affected me at all yet -- apart, probably, from a bit of super which didn't make money even when things were going well. So again, fascinating to follow the ups and downs of the stocks and whatnot. I can't really add much to the teeming cesspools of comment already out there -- much of which I have read -- only that it sucks that at the end of the day, the rich will get richer and the poor and middle will pay for it -- again. A few new regulations and some hardship for a few years -- until the cycle repeats itself. But while capitalism reigns there is little choice -- not everyone can be rich, so plenty have to be poor. At least while the rich get unfair electoral representation (i.e. can bribe officials).
Where is the world headed, I wonder? It is easy to get pessimistic. Even without the threat of global warming things are not looking terribly rosy. With the world population continuing to rise unabated, land continuing to be degraded beyond use, the sea over-fished, toxins and contaminants continuing to build up in the food chain, water being poisoned or hoarded, and personal greed continuing to trump national welfare, is there really a bright future for humanity? I think an area to watch in the next few decades may be India -- as a representation of the problems to be faced by the entire world, particularly overpopulation, pollution, religion-fuelled ignorance, and wealth disparity.
Add global warming and things could really get nasty. I read a few sceptic and science blogs, and it is surprising how many of them are overly sceptical of global warming (sceptical should mean requiring hard proof, not just being universally cynical), or don't see it as an issue worth regulation or spending money on. Maybe in England some nicer weather isn't seen as such a bad thing, even if it means wilder and more frequent storms occasionally. But there are going to be some pretty nasty consequences even for them (disease spread, costlier food), and the risk and cost of trying to fix something we had nothing to do with pales into utter insignificance against the risk and cost of not fixing something we actually caused.
And the more mundane. Work continues to be pretty dull. Mostly writing little data importers, and doing a lot of manual data verification and manipulation. I haven't had to write any really new code for months. I tell you what though, Microsoft products and their proprietary file formats have meant a lot more work for me, and a lot more frustration. Where I can, I've moved to using simple CSV files for most data files. Apart from being trivial to read and write, they are also bloody fast! But excel has to make using any `non-native' file format a right pain in the arse. Just saving a file in CSV format takes 4 extra clicks, and it still warns you when you close the file that you haven't saved all changes -- even if you have. So a big F.U. goes out to B.G. the big C for helping to make life more difficult for all of us with your crappy tools and shitty file formats.
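To illustrate the 'trivial to read and write' claim, here's a minimal CSV sketch. It assumes the simple case - no quoted fields or embedded commas, which real-world CSV (and anything excel emits) would need handled:

```java
import java.util.ArrayList;
import java.util.List;

public class CsvSketch {
    // Minimal split-on-comma parser; assumes fields contain no
    // embedded commas, quotes or newlines.
    static List<String[]> parse(String csv) {
        List<String[]> rows = new ArrayList<String[]>();
        for (String line : csv.trim().split("\n")) {
            rows.add(line.split(",", -1)); // -1 keeps trailing empty fields
        }
        return rows;
    }

    // Writing is just the reverse: join fields, one row per line.
    static String toCsv(List<String[]> rows) {
        StringBuilder sb = new StringBuilder();
        for (String[] row : rows) {
            sb.append(String.join(",", row)).append('\n');
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        List<String[]> rows = parse("id,name\n1,foo\n2,bar\n");
        System.out.println(rows.size()); // prints "3"
    }
}
```

A dozen lines versus a library and a proprietary format - which is rather the point.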
Haven't been playing many games either. I got to the last battle in Rogue Galaxy but can't be bothered finishing it (lack of save points). I'm pretty pissed off LBP got delayed, particularly as it certainly looks on the surface to be pandering to the irrational beliefs of a random internet poster, but I'll keep those thoughts to myself, at least for now. I just hope this doesn't indicate a predisposition to censor `offensive' content once the service goes live. And speaking of censorship, that idiot Conroy should go for trying to bully Mark Newton for stating the obvious about the ludicrous scheme to filter the entire Australian internet.
Tearing, Game Demos and Controls
Ahh, game demos. Do they really convince anyone to buy a game? Or do they just convince them not to? I can feel a whinge coming on ...
A couple of new demos on the PSN today: Mercenaries 2, and Fracture. Actually neither are games I thought looked interesting enough to buy before I tried their demos, but having tried, they're even less likely to be swapped for some hard-earned plastic.
And the main reason? Controls. Both let you 'invert y', but not x! For some reason - I think Jak and Daxter - I learnt to use camera controls opposite to the rest of the world.
After a couple of minutes of looking at the floor or the sky or spinning in circles, I gave up on Fracture. It was just too frustrating and annoying - even for a demo. They've obviously put a lot of work into it and, from the small demo I saw, it is probably a competent game, but without inverted controls it just ends up deleted from my hard drive. From what I could tell, the ground-altering mechanic is a little odd, as I expected it to be - a bit neither here nor there, but I guess it could 'work' ok if its use isn't too gimmicky or forced.
Mercenaries 2 wasn't much better. I ran around randomly and somehow ended up where I was supposed to, but well, died. It looks like it could be fun, but running around looking the wrong way isn't. I like the stylised graphics, and the explosions are nicely done.
Both suffered another major turn-off for me too - screen tearing, where they were too memory-strapped or lazy to use multi-buffering. Some devs claim that using double-buffering would halve the frame-rate when they just miss a frame, which is true (if you just miss a frame on a 50fps animation, you have to waste processing/wait a whole frame before you can flip, and you end up with a 25fps frame-rate, spending approximately half the available time/cpu power doing nothing) - but triple buffering doesn't suffer this problem - it's mostly just a memory cost. Well at least the tearing was only minimal, but still it was there, and it is a visual glitch I've found particularly irritating ever since I first saw it on crappy PC games that didn't have the hardware to easily avoid it on every frame (the way Amiga hardware worked you had to go out of your way to make things tear, so it was a real shock). It's such a tiny little nod to quality that can't cost more than a tiny fraction of the ginormous bloody budgets they spend these days - I can't see how the art department could sign off on such a sloppy trade-off (versus dropping the texture resolution slightly, for instance).
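The frame-rate arithmetic can be sketched as a quick back-of-envelope calculation. Assumptions: a 50Hz display (20ms per vsync), frames that just miss the deadline at 21ms, and for the triple-buffered case that render time is the only limit:

```java
public class BufferingSketch {
    // With double buffering, a frame that misses vsync must wait for
    // the next refresh before it can flip, so the render time is
    // effectively rounded up to a whole number of vsync periods.
    // With triple buffering, rendering continues into the third
    // buffer immediately, so only the render time itself limits fps
    // (assuming render time >= the vsync period).
    static double effectiveFps(double renderMs, double vsyncMs,
                               boolean tripleBuffered) {
        if (tripleBuffered) {
            return 1000.0 / renderMs;
        }
        double periods = Math.ceil(renderMs / vsyncMs);
        return 1000.0 / (periods * vsyncMs);
    }

    public static void main(String[] args) {
        // 21ms frames on a 50Hz (20ms) display:
        System.out.printf("double: %.1f fps%n",
                effectiveFps(21, 20, false)); // prints "double: 25.0 fps"
        System.out.printf("triple: %.1f fps%n",
                effectiveFps(21, 20, true));  // prints "triple: 47.6 fps"
    }
}
```

So missing the deadline by 1ms costs nearly half the frame-rate under double buffering, but only ~5% with a third buffer - which is the trade-off being argued above.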
Ok yes, they were only demos - and sometimes problems like this get fixed by release time (Burnout?), and often control inversion is also included in the final version - but that doesn't help in evaluating a game from its demo. It's not like I had planned to buy either of those games (I guess they're not my type of games), but the demos didn't help to convince me otherwise.
While I'm on demos, last week we had 'Pure', a sort of trick-bike off-road racing game. Weird choice of game mechanics: you have to do slow, clumsy, hard-to-pull-off `aerial tricks', otherwise you don't get enough boost/juice/whatever they call it to be able to win a race. Sounds pretty tedious to me. Actually I may have had enough off-road racing with Motorstorm - I'm still not even sure I'll get Motorstorm 2. Well, the local split-screen would be nice, and maybe it'll load tracks and cars faster. Pure does look nice though.
I bought a new HDD for my PS3 yesterday, backed it up and installed it. Very easy process, although the screws holding the drive in its caddy were a bit tight for the jeweller's screwdriver I was using, and it took a couple of hours to back it up and restore it (nearly 50GB used). Although disturbingly, it sometimes seems to start up a bit too slowly, and then goes 'missing' at power-up until a restart. No big deal I suppose - if that's all it does. It's a western digital 320GB drive (WD3200BEVT), fwiw. Ahh, I did a search, and it seems to be some sort of interface issue - I tried jumpering the 'RPS' mode on, and so far it looks to have done the job.
I didn't bother backing up the Ubuntu partition, beyond my source tree. I haven't been particularly happy with Ubuntu, and that was even before upgrading to 8.x broke everything. Even after I fixed the boot issue, all it did was leave me with an unusable amount of RAM detected. I had long since got rid of any Ubuntu on my laptops, so I didn't need much of an excuse to jump ship. As I've said elsewhere - I'm sure Ubuntu is just fine for plenty of people, but it certainly isn't for me.
So I spent a bit of time trying to work out what system to install. It was pretty depressing really, it was quite difficult to find ANY quality or useful information at all (or maybe it was this incredibly disturbing video and comments I'd seen earlier in the day?). There are a few blogs and news sites around for PS3 development, but many (most?) of them are quite stale. Often started in a flourish and soon forgotten.
Of those still active, the developerworks forum for Cell development is full of newbies with very basic GNU or C/parallel programming questions for the most part, or weird arguments about performance (e.g. 'why is the ppe so slow?', 'how come i can't get the peak theoretical bandwidth in a memcpy?', sigh). The beyond3d and ps2dev forums seem to be stuck on the fact that the GPU is inaccessible, or relying on and waiting for a couple of guys working on ps3-medialib to deliver some magic, and generally just don't seem to be all that helpful. There are a couple of queries about useful linux distributions, but they are either unanswered or not helpful to me (e.g. they recommended ubuntu). I was starting to think that the whole situation was a lost cause, and certainly nobody seems to be working together toward any common goal. I finally stumbled upon ps3forums - which seems to be a bit more active, and a few of the sort of questions I was interested in at least had some sort of answer.
Anyway, since there was a decent article on DeveloperWorks about installing FC7, and the IBM SDK 'support', I thought I'd give that a go. Burnt a DVD and away I went - I even checked the media. But unfortunately, it couldn't find the DVD it booted off when it came to looking for packages, for some reason - so I couldn't get any further. A bit of a waste of time. I couldn't find any mention of this show-stopper on the 'net, so I gave up. I have FC9 on my laptop and although I'm happy enough that it works, the default setup is far too fat for a PS3, and it took forever to install, so I thought I'd give it a miss.
Someone on ps3forums.com had suggested Ubuntu in reply to a query about what distro to use, but then changed his mind, complained about how much of a timewaster it was, and suggested YellowDog. I hadn't really considered YellowDog - it seemed a bit out of date, and well, just different - but after my experience upgrading Ubuntu - to get features I didn't really need - I figured stability and ease of setup would do over bleeding edge. So, YD downloaded (fortunately my ISP mirrors all these DVDs, so the download is as fast as the phone line can muster). Hmm, Fedora 6 based. Ok, so it's a bit old. Still - the install worked just fine first time. It was also pretty fast, and it boots up pretty fast too. Both much faster than Ubuntu ever did. For some reason the default 'development' install doesn't include ppu-gcc and particularly ppu-binutils, but I found out what I needed, and it seems some of my test code can build and run. I can always compile a newer gcc if I need it.
Ahh well that's done, and I've updated it too, now I can reboot back to the GameOS and forget about it for another few months!
I recently posted my last entry on b.g.o, and I said I wasn't going to rant about what is wrong with the desktop (well I did before I deleted it). But maybe I should have, as with fortuitous timing, my second to last entry about Chrome should have reminded me what Chrome is capable of. I will only say in my defence that I was only considering Chrome as a browser, and maybe as an `ms office' replacement, and dismissing views otherwise (well that is how I use a browser).
First, some background. I had been noticing the trend to move toward Python in GNOME in particular, and I haven't liked it. I know why developers like it (well why they claim to like it), but as a user it leaves a lot to be desired - slow, extremely heavy applications, that too-often bomb out with meaningless backtraces. I had some ideas that could make it palatable to users (well, beyond just debugging), but it relied on some features which Python lacks, so I gave up thinking about it. But Python isn't the only problem.
The GNU desktop is in an awful state - and that's even if you stick to just one flavour and its attendant applications (I don't know about KDE, but the following is true of both GNOME and Xfce). If you take a default install of your average `distribution', for example, Ubuntu, after installing a rather large number of packages you end up with a pretty login window, a relatively pretty desktop, and quite a few applications, from basic to outstanding, from buggy to stable. But what is behind the actual desktop? A mish-mash of random programmes the packager/desktop team determined to be useful for themselves or some mythical `average luser'. Some work well, some don't, some are necessary for the basic operation of the machine (auto-mounting and network selection), others are pure fluff, most are in-between. Also - it barely runs ok if you have only 256MB of memory, for example that `older machine' that GNU/Linux can supposedly take advantage of, or embedded/special machines, like a Playstation 3, both of which actually affect me.
One problem is that the `in thing' these days seems to be to write (or re-write!) many of the applets/applications that provide core desktop functionality using Visual BAS... oh oops ... Python. Now Python is a `scripting language'. This means that every time you run a python app, it must compile the source-code into byte-code or perhaps machine code (I do not know if there are pre-compilers for it). This takes time, and it takes memory, and to do it well it can take a lot of memory and time, and this is one reason traditionally that developers had much beefier machines than users - because they were the only ones who had to do this step, once. If it only compiles to byte-code, then every basic instruction is emulated using a state machine - a 'virtual machine' (VM) - which is at least an order of magnitude slower than the physical machine. Any conversion to machine code, and further optimisations which make the running speed faster, also generally cost in memory and cpu time during the compilation phase. For simple scripts and applications this is no big deal, but for more complex applications it can start to add up. Not only that, because many of the libraries themselves are written in the scripting language, every application which uses those libraries needs to recompile the same libraries every time it runs - and more importantly store its own copy of the byte/machine code. I will also mention in passing that many of these `libraries' are just `wrappers' - glue code which just calls some `C' library to do the actual work; but someone has to write those too, so either the script engine `vendor' or the library `vendor' must expend additional resources (which wouldn't otherwise be needed) for this work, so the cost isn't borne solely by the users.
Scripting languages are just fine for short-lived applications, they run, do their job, and finish, releasing the memory they used - even if it is excessive it doesn't usually matter. And often they are `batch' processes anyway - non-interactive programmes which run by themselves, and so long as they run to completion they needn't be particularly speedy. But now with applets and other trivial applications that run for the entire time you're at the computer, or that require interactive response, they are a potential disaster. You now have a separate VM for every application loaded, with all the non-shareable data that entails. Often scripting VMs haven't even been designed with this in mind, and in that case they may be quite cavalier with their use of memory because it isn't an issue for the workloads for which they were designed. Most of these languages use garbage collection too - but garbage collectors are quite hard to write properly, so there are often bugs, but even when those are all fixed, to get performance they generally need more total memory than they're actually using (sometimes by a lot, but often about twice). And again, all of this overhead needs to be duplicated for each VM running. Contrast that to say a C application. When an application is compiled in the normal way, all of the code, and all of the code of the libraries, can be shared in memory. Far more time and memory can be spared during the compilation phase, since it is only done once. And explicit memory management at least forces you to think about it, even if you don't take advantage of that opportunity for thought (even if explicit memory management has its own slack/overheads for efficiency, it's a trade-off you can control). And finally, often the reason programmers use scripting languages in the first place is because they are easier - or to translate (in some cases) - they don't know any better. Although they may have the enthusiasm and the ideas, they may just not have the skills to pull it off properly.
Another problem affects all languages - that is the startup time/non-shared data overhead. Things such as font metric tables (sigh, and font glyph tables/glyph cache, now the font server has been basically dropped - remote X sucks shit now, even though networks are much faster), display information, other global state tables, and other data which is loaded at run-time, and could otherwise be shared among applications. This only gets worse when you have many versions of the same library present, and/or completely different libraries which do the same thing. Sure you can run a KDE application on a GNOME desktop, but it isn't at a zero cost, as even basic things like displaying a string of text involves an extraordinary amount of logic and data, little of which will be shared.
Having so many libraries to choose from, and indeed a continually changing set of libraries to choose from, is also a particular problem with GNU desktops (and Windows at least). Add to that - people keep coming up with their own `framework' which will `solve all the problems' in a specific domain, but all it really does is add yet another set of libraries (and versions over time) that we all have to put up with if we want to run a particular application that uses them (or worse, the poor developer is burdened with having to develop and maintain yet-another backend when they could be doing real - and more importantly; interesting - work). Even if the one library is the one everyone uses, new versions seem to come out every year or so.
So the result is, that in 2008 we have a desktop with barely more features than one in 2000, yet consuming far more resources. Tiny little applets which could just as easily have been written in any language are dragging in millions of lines of code and megabytes of memory by virtue of being written in a scripting one. Lots of libraries - many which do the same thing, even just different versions of the same one - often end up being installed as well.
There are at least a couple of ways to get around the scripting problem, and they also cover shared state and the libraries breeding like fundie children as well. If you're not using scripting they don't help - but shared state could still be addressed using traditional IPC mechanisms (i.e. use a server), though because of the complexity this is often not done. Fixing the breeding library problem in general is tricky - each library needs to be far more disciplined in its design, and make use of ld features for backward/forward compatibility if required. Some duplication is still necessary - competition is generally good - although perhaps application developers should avoid using every new library that comes out just because it is new and promises to abolish world hunger.
First possibility, you have a separate process that compiles and executes all scripts - a script `application server', in today's language. For a stand-alone script, a small client uploads/tells the server which script to execute, and the server sends the results back to the client using queues and/or rpc. Because the scripts are executed in the same address space, they can share libraries, the garbage collector, and other resources. You also have the benefit that if you want to extend your application with scripting facilities, any application can use the same mechanism to run their own scripts. This could also provide a powerful system whereby you can write meta-applications, talking between applications as well, if you design the system properly. Threading is an issue - but it's an issue that only has to be solved once, by people who probably have an idea, rather than clueless application programmers.
The other way is to move your applications to the (one) server. All applications simply run in the same VM/address space, and again all code and much data can easily be shared among applications. Where you need additional non-scripted facilities you either build them in/use plugins, or use IPC mechanisms. And you only have to do it once too. Although meta-application programming is certainly possible, it would have to be an additional layer or protocol that needn't be there by design. And you can't really write an application that has a scripting `extension mechanism' either - since the app is the script.
The first way is sort of how AREXX worked. It can be quite simple, yet very powerful. Nobody wrote applications in AREXX, but they did write meta-applications which literally let completely unrelated applications `talk' to one another. The second way, if taken to the extreme, is something like JavaOS or that M$ thingy that does the same thing.
Hmmm. So I guess one potential realisation of the second idea is Chrome. It isn't a browser, it's an application framework, or rather, an os-independent application execution environment, a meta-operating system if you will. The sort of thing Java was capable of, but didn't work so well because it was too fine grained/no central server. The sort of thing Flash is basically doing now, although it's too buggy and also has no central server. Probably the closest is the sort of thing GNOME was originally envisioned to be (as I fuzzily remember it - the NOM in GNOME) before being down-graded to basically a Gtk theme - although the glandular-fever infected among them are still thinking along those lines, I think. The sort of thing Firefox always claimed to be, but you couldn't take seriously because we all know what a bloaty pig's bum it was, and still is, even though they've made great strides in the swine's bun-tone. Well, at least the process model in Chrome makes sense now.
Ok, so perhaps I was wrong in my second to last post on b.g.o. Chrome isn't just another featureless webkit browser after all (although it is still too featureless for me). But it isn't just Firefox that has to fear from another browser, it is not just desktop applications that have to fear from another browser, it is the desktop as we have come to know it - and thank fuck for that too.
Ahh well, maybe that isn't the idea `they' had. It has the potential though, if the VM and GC is as good as the claims on the box. And if Google doesn't do it, someone else can - because it's free software.
A Hacker's Introduction
Hello once more, or for the first time.
I figure that as I am no longer a part of the GNOME community, do not use most of their software, and have no interest in it, my diary on blogs.gnome.org is no longer particularly appropriate. So here is yet-another web diary to add to the 3 or so stale ones I have lying about the place. I will start with a little introduction and history - since I don't have any great code ideas to share today. It isn't in strictly chronological order, but it should cover the important bits.
So, ... the story so far ...
The first computer my family owned was a Commodore 64. We just played some games copied from friends for the most part, and typed in the odd BASIC programme from magazines and books. I dabbled a little bit with programming - entering sprites using hand-encoded binary and making farty beeps through the SID chip. When one of my brothers bought another one, which came with a disk drive and a pile of magazines, it opened up a whole new world.
After typing in some 'acceleration' libraries from a Compute!'s Gazette (iirc) magazine I discovered the wonders of machine code, and subsequently assembly language. I typed in an assembler from one of the magazines - it extended BASIC with mnemonics, and you had to implement the 3 (or more) passes using FOR loops(!) - and then taught myself 6502 assembly language from the related article and tiny snippets I found in various magazines. We lived in the country and had no access to bulletin board systems or much information, so it was all down to my own curiosity and probing. I can't remember how I learnt the rest of the mnemonics, perhaps a book in the town library or more magazine articles, but I ended up fairly proficient at it. I also can't remember where I got it from - perhaps I typed it in from a magazine too - but I ended up with a 'machine code monitor' as well (a debugger/disassembler) which let me disassemble demos from magazine cover disks and learn further. I even wrote my own disassembler (in assembly language of course) and dumped the entire BASIC and Kernel ROMs to a printer to learn further (reading the code subsequently I couldn't understand most of it ever again). Other bits and pieces were an interpolating printer driver for GEOS, and lots of other little useless toys, graphics ('vector' graphics, sprite multiplexers, raster interrupt stuff), and sound routines (a primitive sequencer iirc). I still used the machine to type my first few essays for uni.
Of course as a Commodore-head I dreamed of an Amiga, and finally got one during my first year at uni. Of course the first thing I did was borrow a book on 68K and hand-assemble a matrix multiplication routine to accelerate some 3d graphics in the horrid BASIC the Amiga came with - at least it came with a language though. I'd heard about Fish disks from magazines, so I soon got the Fish disk with A68K on it, and slowly collected other tools and got to work - learning M68K and hacking up code on a floppy-based system. And yet it booted faster than my latest GNU machine does (and just as well, it sure rebooted a lot more often too) and the editor was more responsive. Graphics routines, interrupt queue-based blitter 'engines', 2d stuff, 3d stuff, even a mod player routine. The golden age when all was learning and no other distractions like life to get in the way. Using snippets from magazines and books (the university naturally had a much better stock of technical books) I read up on assembly language and hardware and i/o registers and all the rest.
Then I got a modem. At this point the internet didn't really exist - usenet was around, and ftp was just starting to pick up - but you could only get it at uni. That opened up access to other software, and other people, and I got in touch with a 'demo group' who wanted a coder for demos. We never really did anything to speak of, but we had a lot of fun being creative, and trying to mix real-time code, graphics and music together. Along the way I got an Amiga 1200 - having a hard drive was a nice step up.
It all led up to a fateful Easter Long Weekend where I managed to stay up for 73 hours straight working on our demo, attended the demo competition, got disqualified for a silly optimisation mistake which had it fail on the target hardware, slept an hour on a plastic chair with my head on a Laminex table, had an external floppy drive stolen, set up for a video presentation at a rave/dance party, blew the 'blue' output of my 1200 (everything turned a yucky mustard grey/yellow) from a dodgy home-made video cable which shorted out, stayed up all night helping run the visuals at the party, and nearly had several accidents being driven home by a friend who all but fell asleep at the wheel mid Monday morning. I think I've been tired ever since, but maybe that's just a coincidence (sleep apnoea diagnosed years later). Although I kept coding on it, and even bought new hardware, I think the golden age had passed - for me and Commodore.
I got the ROM Kernel Reference Manuals and started writing more OS-friendly code rather than hardware-banging stuff. It was still in assembly language though. A little multi-threaded file manager I never finished, my MOD player routine came along about then. I dabbled a little in extending Amiga E - but couldn't maintain interest - my contribution was a 3-d library written in assembly language. Amiga E was a bit like Pascal but had an outrageously fast compiler - 45K of machine code - written by the clever and nice bloke who also wrote the first BrainFuck compiler - 1024 bytes of compiler which generated executables directly. I wrote a freeware extension to the AmigaOS's multi-media platform - datatypes - which read and displayed GIF files much faster than anything else. I learnt a lot of stuff about the importance of latency vs throughput, asynchronous I/O, threading and optimising code (it was all in assembly language), even what OO is all about.
Linux started to show up around me about then, along with the growing internet. Many GNU tools made their way to AmigaOS too, and I started to learn about the FSF (at first I couldn't believe anyone would give away such software - let alone the source-code, none of which I could use anyway). Eventually a mate had a cheap motherboard going and this new 'Linux' thing seemed to be getting more usable, so I bought a PeeCee and stayed up all one night with him trying to get RedHat 3 (I think?) going on it. After failing to get anywhere, I tried slackware and it worked. Suddenly I was in the world of pain that is the IBM compatible PC! Things get a bit hazy there. Work, life, and whatnot, too many very late nights doing geeky stuff with mates or IRC or other stuff. I can't remember what I used to use the Linux machine for other than internet access and compiling applications. I had learnt C by this time but I can't remember if I even wrote any software - other than for work, which was the occasional portability patch or hacked up Perl scripts. With a bunch of mates I helped run an ISP for a while, I lost some money, some lost skin through stress. A bit of a dark period I guess, which put me off the idea of ever wanting to run a business.
After the positive experience with the gif 'datatype', and sending in the odd patch here and there at work, I slowly became more interested in writing software as a hobby for free again, but this time including the source-code (I regret now that I never released the zgif.datatype source-code, such as it was). The KDE project looked very interesting - but I don't know if it was the GPL issue back then or perhaps C++ scared me off, but I never tried it again. I became involved in the GNOME project around 1998 - working on what would become the (now long-gone) second iteration of the GNOME Terminal application.
I started work on libzvt because I had had the idea from some terminal application on the Amiga which didn't have to 'copy' the lines it displayed. It just used Copper (a video co-processor) tricks, telling the video DMA which lines to display rather than re-ordering them so they displayed in the right order. Of course, on an X Windows display no such facilities were available, but I used similar ideas to optimise the screen update and minimise memory allocations - and by the end it was a pretty bloody fast and solid piece of code. Since I did much of my work on a Solaris box - which had an awfully slow malloc - re-using allocations was a big win. Unfortunately nobody else working on the code-base ever understood why I did it that way, so patches came along and removed this desirable behaviour, and slowly I lost interest in maintaining it - because of a bit of a loss of direction for it, because I was too busy on Evolution by then, and because my ownership of the application had been undermined by someone who wouldn't let me do a few things I wanted. However, before then my work on libzvt/gnome-terminal got me a job working for Ximian (as #8) on this new 'Evolution' application.
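For what it's worth, the trick translates to any system: keep an array of line pointers and treat it as a ring, so a scroll is just an index bump plus recycling the oldest line's buffer - no line data copied, no per-scroll malloc/free churn. The names and sizes below are illustrative, not libzvt's actual structures.

```c
#include <stdlib.h>
#include <string.h>

#define ROWS 25
#define COLS 80

struct screen {
    char *line[ROWS];   /* ring of line buffers, allocated once */
    int top;            /* index of the topmost visible line */
};

static struct screen *screen_new(void) {
    struct screen *s = calloc(1, sizeof(*s));
    for (int i = 0; i < ROWS; i++)
        s->line[i] = calloc(1, COLS + 1);
    return s;
}

/* row 0 is the top of the display, wherever 'top' happens to point */
static char *screen_row(struct screen *s, int row) {
    return s->line[(s->top + row) % ROWS];
}

/* scroll up one line: the old top line's buffer is blanked and becomes
 * the bottom line; every other line 'moves' without a byte copied */
static void screen_scroll(struct screen *s) {
    memset(s->line[s->top], 0, COLS + 1);
    s->top = (s->top + 1) % ROWS;
}
```

The same indirection works for scrollback: keep more lines than rows and let the visible window be a slice of the ring.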
Actually I never really had much of an interest in Evolution (I was a die-hard Elm user!), but I ended up working on it for 6 years. Rather interesting times. Working for a startup like that was really a unique experience. We worked like maniacs for the first year and produced tons of code - I don't know about the other guys but I never really expected to make gazillions like some other startups, but it would've been a nice bonus - so it wasn't the money driving me. I'm not sure I'd do it all again but certainly some of it was worth it. I do wish I'd known what I know now about programming and design, although I don't think we really made too many mistakes given what we were working with and aiming for. I always disagreed with the clone-MS aspect of the project - we all thought we could have done better, and I still think we could have. I would also have used CORBA more, and more effectively - but without a fully working ORB in the early days, and since most of us were busy just getting things working, I guess it was always going to end up the way it did. I also would have kept things simpler - but it is hard to know how simple something should be before you've done it once or twice. I think the project worked quite well considering few of the developers resided in the same city as any other, let alone the same timezone. Although it wasn't all sweetness and light ... I had a lot of nasty conflicts with inexperienced management (not all their fault I'm sure) and was wildly out of touch with many goings on by virtue of being literally on the other side of the planet. It was probably saved by virtue of the code-base being easily compartmentalised into chunks one or two developers could tackle in almost total isolation. And I got along reasonably well with Jeff on the mail component once he came along.
Novell buying Ximian was a double-edged sword. On the one hand they had money to keep paying us, on the other they started pushing uninteresting things like Groupwise backends into the mix. We already had a pretty low opinion of Groupwise from working around its external protocol bugs (not as bad as Exchange mind you), and they threw inexperienced programmers at the task who didn't write very good code to start with, so the opinion didn't go up. Also being part of a larger organisation meant simple one-on-one management and a flat management structure were out the window. Now you had the over-paid baggage of a HR department breathing down your neck for pointless 'objective management tool' reports which they made you fill out on fear of death (which I might add you needed internet explorer to access properly - or maybe I'm confusing that with the expenses system which needed the MS proprietary Java). Other niceties were the yearly business ethics forms you had to fill out to cover their legal arses, and being told in no uncertain terms that you were there to work for the company and they had no obligation to give anything back in return. Even if they paid well, this was a foreign (and offensive - workers are not slaves) concept having grown up in a blue-collar slightly-pinko family during the Hawke/Keating years.
Ahh well. Anyway, we kept struggling on. I wrote a lot of code which never got used (actually some of it did eventually, in a changed form anyway), which is a pity, but by the end I was very burnt out on the project. Novell didn't really have anything I wanted to work on, or wouldn't let me work on projects I thought were potentially interesting at the time (e.g. Mono). And I couldn't come up with a project that they would let me work on either. I despised HR, and no longer had any interest in Evolution. So after a fairly extended and quite complete hand-over period of the Evolution code-base to the Bangalore team (I think I did quite a good job - although compared to any previous hand-over efforts, any job would've been an improvement), a fortuitously timed redundancy let me let go of all that and move forward. I found the hand-over quite cathartic - during the last few weeks, as I 'brain-dumped' 6 years of background into a few wiki pages, I felt more relief as each paragraph and section ticked off. Since then I haven't even run Evolution - I still cannot bring myself to, nearly 3 years later.
Since then, I've been working (for money) on a .NET, WPF desktop application! Bletch. Well, at least I can now say with confidence that MS stuff is awful. What a badly documented, sourceless, buggy, slow, dead-end piece of still-born technology this is. Well it pays the bills, but I will have to see what happens after this - I don't think I can keep it up.
Apart from work, I was pretty burnt out after leaving the Evolution project. I had a fairly long break from just about any sort of hobby coding. I'm still not sure where I'm headed. I've become more 'militant' Free Software - I never liked 'open source' but now I see it as highly damaging. Commercial interests are muddying the waters and trying to impose their corrupt way of business onto Free Software, and it really stinks. I joined the FSF.
I dabbled with AROS a little bit - but although they seem like a nice bunch of guys, I couldn't get terribly inspired. I wrote them an AVL library though - and that got me interested in C again for a while. I also dabbled with some other ADTs, and memory allocation routines, just some nice raw code to try to blow the c-hash cobwebs out of my head - and it was a true joy to return to Emacs. I played with literate programming systems along the way, but I can't make them work for me - the authoring is ok but debugging and writing libraries is not so nice. I even read a bit about Ada and Scheme and Lisp, although none of those inspired me either. I looked at writing a vorbis decoder. I wouldn't mind learning more about signal processing, and I just wondered if I could do it from the spec. I didn't get very far though - it has quite a nasty bit-stream format, and that scared me away. I did quite a bit of work on a content management system/blogging thing. I had some ideas on database versioning and document processing I wanted to play with, but have exhausted most of them now. Cheap branches, automatic indexing/toc/cross reference generation, web-friendly and print-friendly and not too author-unfriendly. I will get back to it if it becomes fun again, but who knows when that will be.
I have a Playstation 3 with Ubuntu installed, and have written a few little CELL routines. That is a lot of fun - I really love the architecture - it deserves success outside of super-computers and game consoles. But I can't really think of anything useful to do with it yet. At least, something useful I can do without it being too big a project for one man and his spare time to contemplate. I have some job-queue stuff, a bi-linear up-scaling routine and a YUV to RGB converter. Just having those patched into mplayer makes quite a difference.
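For the curious, here is a scalar sketch of the sort of YUV to RGB conversion the SPU routine vectorises. The coefficients are the usual BT.601 full-range 8-bit fixed-point approximations - not necessarily the exact values my Cell code uses, which does the same sums across whole vectors at a time.

```c
#include <stdint.h>

static uint8_t clamp8(int v) {
    return v < 0 ? 0 : v > 255 ? 255 : (uint8_t)v;
}

/* convert one BT.601 full-range YUV sample to RGB using 8-bit
 * fixed-point coefficients (359/256 = 1.402, 88/256 = 0.344,
 * 183/256 = 0.714, 454/256 = 1.772) */
static void yuv_to_rgb(uint8_t y, uint8_t u, uint8_t v,
                       uint8_t *r, uint8_t *g, uint8_t *b) {
    int c = y;
    int d = u - 128;    /* centred chroma */
    int e = v - 128;

    *r = clamp8(c + ((359 * e) >> 8));
    *g = clamp8(c - ((88 * d + 183 * e) >> 8));
    *b = clamp8(c + ((454 * d) >> 8));
}
```

On the SPU the interesting part is doing this 16 pixels at a time with shuffles and multiply-adds while the next block streams in via DMA, but the arithmetic is exactly this.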
Well, that pretty much brings us up to today. As for the future, well, more to come no doubt.
Copyright (C) 2019 Michael Zucchi, All Rights Reserved.
Powered by gcc & me!