Saturday, September 30, 2006

I preyed

Ok, that is one more game over - Prey.
And boy, what a game!
It starts off a bit too lame - the initial few rounds try to familiarise you with the concept of portals, walkways, gravity flipping, etc., and frankly the simplicity irritated me at first and I did not have much hope: a lot of irritating puzzles and the like - if I want to solve puzzles, I will go do something else, not play an FPS!
Once you are well into the game, the fun really starts! The music is really good and totally pumps you up in some of the tight moments of the game.
The initial set of 'high power monsters' fight pretty lamely: though they obviously have awesome power, they are easy to dodge and attack... but as the game progresses two things happen: they seem to play better, and many of them come in high numbers.
The level bosses of the initial episodes become common minions in subsequent level-boss fights!
You keep playing with the notion that the 'keeper' is the uber boss... and after killing him - pretty tough, considering he summons all the other level bosses - you think: game over. Hahaha, like hell :-)
The fun really starts now! You have keepers all around the place as though they are the 'common' hunters (the 3rd-level or so enemy who use scoped guns)...
The final boss is pretty tough to kill - until you figure out how to kill her, that is... after which it is 'tame'.

All in all, a very enjoyable game, albeit a bit lopsided in its difficulty - there are places where the enemy comes in droves in cramped spaces, making it really tough... and others where the level bosses are so easy to slay!


Friday, September 29, 2006

Be Cool

Just saw the movie 'Be Cool' - ok, the latter half of it anyway :-)
I have been a long-time fan of John Travolta... and he is rocking in the movie.
The movie has a wonderful cast, really unexpected twists and slapstick, and some genuinely funny dialogue.
But the person who really struck me as amazing was the Rock!

To be honest, I like the Rock and Vin Diesel a lot, but usually both of them appear in action flicks.
In this movie, I was really surprised to see the Rock's acting abilities!
Ok, I won't ruin this with unnecessary praise of the movie or the actors, but if you have not seen it - it is highly recommended...


Saturday, September 23, 2006

Wear sunscreen

Came across this a while back... http://www-math.cudenver.edu/~hgreenbe/sunscreen.html
Considering how much I enjoyed listening to this (the Baz Luhrmann version), I enjoyed the page too :-)
Somehow it feels nostalgic when you hear it... I know of at least a couple of readers of this blog who love this 'song' (if you can call it that).

3 Comments:

Blogger  said...

Nice.

After we moved to blogger beta, the Atom feed URL changed. The previous one doesn't work anymore, so Bloglines stopped monitoring it. I missed a few of your posts because of that.

9/26/2006 08:22:00 PM  
Blogger vijaya said...

Hi Mridul,

It was an interesting read - it led me to read up a bit on Kurt Vonnegut himself on wikipedia - http://en.wikipedia.org/wiki/Kurt_Vonnegut - and got to know that the address was actually given by Kofi Annan :)...

Anyways... was a good spurt of reading on an otherwise uneventful day at work :)

Hope B'lore's treating you well and that you're returning the favor :)

Cheers!
Vijaya

9/27/2006 12:12:00 PM  
Blogger Mridul said...

@ kousik
The move to blogger beta has really pissed me off, actually.
Not only does it not allow me to comment on some blogs, but it has totally messed up my feed, and on top of everything, it is not reversible!
I am sure they want people to move to their new system, but making it one-way even before it is stable is an idiotic idea.
Sorry about the feed change - I was more surprised by it than you were :-(

@vijaya
You don't know about 'Kurt Vonnegut'?! :-P
I am sure your hubby loves him - as did half the college ;-)
Somehow, I never got time to read him... but I do plan to start sometime soon.
If you get the time, there is an interview of his on the 'Daily Show' on www.youtube.com: totally hilarious! http://youtube.com/watch?v=iI-jlbyDf0k

9/29/2006 11:21:00 PM  


Saturday, September 16, 2006

GERI'S GAME

A lot of you would have seen this before - if not, watch it here: http://www.youtube.com/watch?v=mCrXIbPzejw
It is absolutely, mind-bogglingly, brilliantly fantastic :-)
This much praise ought to convince you to watch it? :-P

*NOTE* And NO, I don't say this 'cos it is about chess ;-)


Thursday, September 14, 2006

Into bit twiddling

*Note* Please avoid if you are not interested in technical drivel.

Let us continue with the earlier topic, which I digressed from.
Now, bit twiddling at the low level is just that - fooling around with bits and bytes to do things more 'efficiently'.

Like the classic bitcount :

Naive:
int bitcount(unsigned int mask) {
    int retval = 0;
    unsigned int indx = 1;   /* unsigned: shifting a signed int into the top bit is undefined */
    int count = 32;
    while (count--) {
        retval += (indx & mask) ? 1 : 0;
        indx <<= 1;
    }
    return retval;
}
Bad indeed!

Better:
int bitcount(unsigned int mask) {
    int retval = 0;
    while (mask) {
        mask &= (mask - 1);  /* clears the lowest set bit */
        retval++;
    }
    return retval;
}

Just to show the relative difference between the two algorithms - you do have better options :-)

Similarly, you can have one for finding the lsb (least significant bit):
a) iterate from bit position 0 to 31,
b) extract only the lsb (mask &= -mask;) and use '&' against fixed masks to identify which blocks have a bit set: a few '&'s, compares and '+'s (all relatively cheap when done in the context of registers),
c) have an lsb_pos[16k] (or so) array which holds the lsb positions, and use a few conditions and shifts to reduce the problem to an array lookup + offset,
d) drop to assembly and use that (a single instruction in a lot of cases) ;-)

If your input space is random, then iterating from 0 to 31 makes no sense for finding the lsb. On the contrary, if you are sure there is a high probability of success within the first few bit positions, then a brute force scan from 0 to 31 might be good!
Similarly, if you are going to have a large number of arrays and the lsb_pos array might get kicked out of L2, then relying on that approach might not be too good - (b) or even (a) might be better! A cache miss is quite expensive on modern processors.

What I am getting at is that the runtime behaviour of a piece of code is not dependent on just its complexity - but also on various other aspects. Of course complexity plays a large part (I will show an example below where the relationship is inverted), and for most cases it should be the first tool for inspecting any piece of code or idea - but don't rely on it blindly. It describes asymptotic behaviour in the face of a large input space: which rarely happens in actual practice.
Even when the complexity of different approaches looks the same, the runtime behaviour can be markedly different: more so if you start considering the actual implementation aspects.


Now to move on to a slightly counter-intuitive example.
Consider that every so often you will generate a list of probable options with 'tentative weights' associated with each. Each weight is a heuristic guess as to how good the option is - and you will essentially use this to select an option and do a search to find its actual value (those familiar with game theory can think of this as weighting the options in a minimax or alpha-beta tree).
If you find an option which returns a value above some minimum - you are done and can 'return'.
Also assume that your 'guesses' are reasonably good (but not foolproof, of course ;-) ).
Now, given this problem, how will you pick the options to be tried?

When I ask this question to interviewees, without batting an eyelid they answer with all sorts of sorting algorithms - ranging from bubblesort to heapsort - and some even go as far as writing down the code in their enthusiasm to get an easy answer.
But they miss a fundamental point - they did not study the probabilistic aspects of the problem space properly.

In this problem, if the first option is the right choice it makes no sense to examine the others at all!
And if your 'guess' is reasonably good, then the probability of needing to go all the way down to the last option is pretty low!!
So in this particular case, a brute force "scan for the highest and return" approach might actually make a lot of sense compared to first sorting all the options and then picking from the sorted set.
Hence your O(n^2) approach could work out better than O(n log n).
Of course, implicit in the 'n log n' approach is the 'cost' of doing the actual sort: both in terms of processing and memory.
Actually, this is the approach used by most chess programs to select the best move from the generated set - so no, I am not yanking your chain about this problem :-).

These might look like corner cases, and maybe they are: but understanding your problem and its runtime aspects goes a long way in designing a solution with a low 'cost' (performance tuning, rewriting, etc.).

Recently, I was faced with another interesting problem - not really a problem, but more like reviewing a design. There was a proof of concept being discussed for adding something to the server. First of all, it was something which would be one of many options, and secondly it was not going to be something which would be 'touched' too much.
The interface was going to be a simple 'send to all servers' and another 'notify me when you receive anything from other servers'.
Two simple methods - add another to initialise, and the whole subsystem would have about 3 exposed interface methods.
This new subsystem would essentially initialise an external 'provider', configure it with some configuration values (typically as part of init()) and then it would be ready to send and receive.

But voila, the design presented had 10+ classes, a bunch of interfaces, etc.: all for something a simple util class with 3 methods would have sufficed for - and that too for something initially defined as potentially throwaway work.
It was overdesign at its best: without even understanding what problem was to be solved, what the infrastructure provides, what its contracts with the API users were, etc.
Worse was convincing them that it sucked - the design had all the common 'design patterns' in it, so how could it be wrong! :-)

Somewhere down the line, a lot of developers forget that it is not our job to design marvellous class hierarchies and abuse design patterns. The intent is to solve problems - everything else is an aid in this quest: if you find that something is making your life miserable, don't do it!
Introducing 10 classes to do something which could be done by 1 - offering nothing of utility to anyone, just making the code 'look good' - is overkill.
I find that a lot of Java programmers in particular are prone to this sort of overdesign malady.
Sad indeed is the state of a lot of projects just 'cos of attitudes like this.
You should be pragmatic in your design, and yet make sure that you do not design yourself into a corner!

So, moral of the story? No moral - code a lot and code well!
Learn from code written by more experienced developers, and you will learn a lot.
Programming still remains a craft - which is why some codebases are very intuitive and a joy to work with while remaining highly performant, while others suck ass.


Friday, September 08, 2006

Java Puzzlers

Just finished "Java Puzzlers: Traps, Pitfalls, and Corner Cases" by Joshua Bloch and Neal Gafter.
Very interesting book - I learned a lot of interesting things, and it clarified a bunch of others :-)
It takes a bit longer than a 'normal' book - it is just too tempting to solve all the puzzles: leading to me breaking my head on a few of them and finally having to give up after a long haul.
Highly recommended.


Wednesday, September 06, 2006

"The Long Strange Trip to Java"

Extremely interesting site - http://www.blinkenlights.com/classiccmp/javaorigin.html
It documents what Patrick Naughton - a member of the "green team" at Sun which did Oak, which later became Java - remembers about the evolution of Java and its initial days.
Worth a read !


Monday, September 04, 2006

bit twiddling ...

I find myself at home relaxing most of the time with nothing to do... and it is irritating!
Luckily, I still have a cygwin compiler here on my parents' comp and some old chess code that I had written... so it is back to chess!

Typically when you write code, the general attitude is: get it right first, and worry about performance later. This does help in a lot of cases, and as a general rule of thumb it might be fine.
But from my experience, I see something like this happening to a lot of novice chess programmers:

A) You finish your code, profile/optimise a few times - and then hit a brick wall.
You end up with inefficiency spread across the whole codebase such that nothing registers as a hotspot, but the whole thing sucks.
It really does make sense to take performance into account while writing code - premature optimisation really is an evil: but inefficient code is not the alternative!

If you are going to use a large number of lookup tables such that they get kicked out of L2 - the memory latency and, worse, the TLB thrashing will kill your program.
If you use branches without bounds, the BPT (branch prediction table) will not be able to accommodate your loops, and you will end up messing up your pipelines.
I have seen people muck up even simple things like loop invariants - which might not be simple enough for the compiler to optimise, yet trivial enough for a human to fix.
Compilers are pretty dumb when it comes to a lot of optimisations - and if you don't want to sprinkle your code with assembly (I hate that), it might make sense to 'guide' the compiler into generating optimal code... for example, make your code amenable to being converted into cmov (again, a rule of thumb :-P), and reduce branches if you can help it... sometimes the overhead of a few extra cycles spent executing some code is worth it if you can preserve the pipelines.
I remember once replacing a branch/assignment with a shift/and... it helped quite a bit - though on the face of it, it looked more expensive!
Read the processor optimisation manuals - even if you code in a high-level language!
The insight into how the processor works is enlightening... don't rely on it too much while coding, but knowing how your code might get executed helps in writing better code.
Look at the assembly listing of your program once in a while - especially the parts where most of the time is spent - you might be surprised and shocked at what you see :-)

B) Which brings us to the next major goof-up: micro-optimisations!

On the flip side, I see programmers who are obsessed with optimisation - so much so that they write 'wonderful' tight code for their small functions and try to squeeze the most out of them.
They hack around with assembly for a function that is rarely called, forget the more fundamental design aspects, and micro-optimise their inefficient design.
When they sew everything up, the final program sucks ass.
Their movegen will be blazingly fast, their makemove will be efficient - and their perft or alpha-beta search goes like a snail.
In general, writing inline assembly is not very good from an optimisation point of view - you are messing with what the compiler could have done and may end up ruining things for it: dabble in it only if you are darn sure there is no other alternative.
For example, it might make sense to use it for bsr/bsf (find msb/lsb)... the alternative would be a bunch of cmovs.
Of course, taking all these in isolation does not really make sense... the rest of the pipeline in which the code executes plays a vital role on modern processors.
If you can parallelise your code such that multiple pipelines can potentially execute it, you could end up with a nice speedup!
The way your code gets executed, the context in which it executes, how you optimise the data it requires, the data layout, memory alignment, etc. play a very crucial role...


Though I spoke primarily from the point of view of C, most of the above is relevant to other high-level languages - even ones which run atop a virtual machine.
You might be tempted to treat all of these as a black box while writing code, but if you know how the whole system might eventually behave, it helps you write better code...

What I have found is that most rules of thumb are just that - rules of thumb.
You should know when to stick with them and when to break them... and therein lies experience: blind faith in these 'rules' will only lead to problems - and worse, it is sometimes tough to convince their proponents that they really are shooting themselves in the foot!

And most of the above is just that: rules of thumb...

I wanted to talk about bit twiddling in particular, but figured it made sense to ramble briefly first, to clarify that what I talk about next is subjective and extremely context sensitive.
What works for one need not work for another!
