Author Archives: Maurício Gomes

The awful experience of firing someone I hired for the first time

I have hired people before, but I never had to fire them; I have also fired people before, but never people I had hired myself. This time I had to fire someone I hired, and it gave me one of the worst work weeks I have ever had.

At the company where I am currently CTO, we needed someone to help with minor programming tasks, such as updating our programs when an SDK changes or implementing small features, so we had to decide what sort of employee to hire.

We chose to hire an intern, for two reasons. First, a full-time employee would be too expensive for us for minor tasks like these. Second, our office is within walking distance of Universidade de São Paulo (the most prestigious Brazilian university, both inside and outside Brazil), so it was just a matter of hiring the intern formally through USP and actually treating him like an intern (not like a “cheaper” employee, as some places like to do).

We ended up putting our ad up in the middle of the school vacation, which meant that few people saw it, but some did. My hiring process was simple: first the candidate had to pass the “Fizz Buzz” test, and then he would be interviewed by me and the CEO.

I ended up hiring a guy; let’s call him “Bob” (not his real name, not even close). He worked for two weeks before I ended up firing him. Those were among the most stressful weeks I have ever had, and the day I fired him my week was ruined: I fired Bob in the middle of the week, and I spent the remaining days upset, grumpy and bothered by everything.

My first feeling on firing Bob was that I was firing a guy who really wanted to work. Bob is one of the most persistent and organized people I have ever met. He really wanted the job, he was always on time (something very rare in Brazil), and he was always busy, constantly trying to do things, anything. Bob is also young, just starting his career, and firing someone with all those attributes made me feel bad. I felt like I was hurting someone, making him sad, maybe even ruining his career. In fact, this is why I ended up firing him earlier than I had promised the CEO, and why I was very honest and tried to explain how he could improve.

But although this was painful, the pain passed once the firing was done; yet my week still felt ruined, I was still in a foul mood, and I had to figure out why. The bad mood was making it even worse: I was upset about being upset. In the end I reached a conclusion: I had failed.

I hired the guy, and he was a very poor fit. Despite being a maths major, he did not know that “difference” meant subtraction (he thought it was a division), and he had no idea what a tangent was (though he knew the calculation formula). And although his resume listed him as a C programmer, he had no clue what a pointer was. I had to fire him because I often spent so much time explaining his tasks to him that it would have been faster to do them myself. It was not only that he failed to live up to his wage; he made me waste time, so his value to the company was negative, and I felt like a failure because I had hired him in the first place.

Why did I hire the guy? Because I was too desperate. There were not enough candidates, and he was the “less worse” candidate. He almost flunked the Fizz Buzz test, not knowing that an else in C applies only to the nearest if statement, not to all the previous ones; he claimed he was rusty after a year working abroad as a pizza cook, and in desperation I let him pass. The other candidates were just outright terrible; at least he knew what he wanted to do, he just did not know how.

This is why I felt bad: I cheated. The hiring process was fudged so I could hire someone when no one was actually worth hiring. I hired the “less bad” guy, and he was still bad enough to create problems for the company. It felt like a personal failure. It was not Bob’s fault, it was mine: I allowed him to be hired despite knowing that I should not have, and I let hope and desperation lead my hiring process instead of the rules I had chosen, or even common sense.

It was a painful lesson. I fired the guy on Wednesday, and ended up skipping work on Friday because I felt so bad that it compounded a medical condition I had and I got sick. From now on I know: never, ever act out of desperation if there is a choice. The company could very well have waited for someone better; there was no need to do what I did. I finally understood why you should always try to hire someone better than you, and why an “A player” hires another “A player” instead of settling for a “B player”.

Reference

Fizz Buzz test

Testing performance of C99 types within a game data structure

This post presents test results for several of the types introduced in C99. I will also explain a little about structure packing, and why I think I got the results I got.

Context

I was creating a shooting game and needed a way to store bullets. I decided to use a C fixed-size array instead of a resizable array or a list, for a simple reason: I have to iterate through all of the bullets every frame, and a linked list would mean a performance hit there. Also, as I will show later, a pointer would make each bullet MUCH bigger. Finally, I think the performance hit of deleting bullets (i.e. overwriting an invalid bullet with the last valid bullet of the array) is smaller than the performance hit of iterating with pointers.

So, this is the basic structure used for my bullet:

I used int for the x and y positions as the “default” integer type; a bullet cannot be in a “half-pixel” position anyway, so there is no need for a floating point number.

I decided to use short for the speed variables to use less memory, since the speed is always quite small anyway. Finally, bulletType is meant to hold an enumeration; I used an unsigned char just to make sure it uses less memory too, since the size of an enum is implementation dependent and might even be some crazy size (like the pointer size on a 64-bit machine, thus 8 bytes).

Some people will now come after me with pitchforks, along with the entire village, chanting: “KILL THE PREMATURE OPTIMIZATION MONSTER!” Indeed it is premature, but I wanted to learn a bit, and we must remember that although we have crazy amounts of memory on today’s machines (compared to a few kilobytes when the first R-Type was released back in 1987), we do not have infinite cache: the more memory you use during calculations, the bigger the chance of a CPU cache miss.

The Benchmark

The benchmark is simple: I created several versions of the above structure, and for each of them I ran a simulation of the game calculating bullet positions. For the sake of simplicity I did not include bullet deletion, rendering, or any other logic.

So, what are the results, and their implications?

The int size is 32 bits, the short size is 16 bits, and the char size is 8 bits; those are the “default” integer types on my machine and OS combination (I ran the test on Windows 8 on an Intel i7). The default struct had a size of 16 bytes (more on why it is not 13 later).

On my machine the least, exact and fast variations of the C99 integer types all ended up having the same size, which is the smallest size possible: if I asked for least, exact or fast 8 bits, I got 8 bits. The structures that used those sizes had 8 bytes.

I also made a test creating structs where all members were the same size: one with everything 16-bit (struct size 10 bytes) and one with everything 32-bit (struct size 20 bytes).

For floating point, the float type has 32 bits when stored in memory, and the double type has 64 bits; their structures use 20 bytes and 40 bytes respectively.

All of that gave the same results in both the 32-bit and the 64-bit compiled binaries.

Now for the calculation speeds, from fastest to slowest (average times as reported by my benchmark):

1. All members 16-bit integers: 0.0126
2. The C99 integer types: 0.0141
3. The normal integer types (the default struct): 0.0184
4. Single precision floating point: 0.02199
5. All members 32-bit integers: 0.0220 (almost tied with single precision)
6. Double precision, the laggard: 0.042

Structure Packing

Why were the structures always bigger than the sum of their members? The reason is fairly simple: alignment. Most processors take a performance hit when they have to read memory that is not aligned the way they like. The most basic scheme is to align everything to the processor word size; for example, a 32-bit processor would require everything in memory to start at an address that is a multiple of 32 bits. The other scheme is to align each type to its own size: an 8-bit type starts at a multiple of 8 bits, a 16-bit type at a multiple of 16, and so on. These are called “self-aligned” types.

x86 processors (from the 8086 to the current Intel i7 and AMD FX ones) support self-aligned types, and ARM (used in the iPhone, most Androids, most handheld consoles and some other devices) supports them too.

Theoretically, then, my default struct, which starts with an int of 4 bytes and ends with a char of 1 byte, would use only 13 bytes, so why the extra 3? They ensure that if the structure is used in an array, the first int of the next element is aligned properly. The structure has a size of 16 bytes because this guarantees that if you put several of them in the same array, every element starts properly aligned, with the ints starting at addresses that are multiples of four bytes.

Calculation Speed

As for calculation speed, we can see that speed was mostly tied to size: the largest structures were the slowest. This is because they use more memory and increase the chance of a cache miss; modern processors have a limited amount of cache, and if you churn through data faster than they can populate the cache, especially if your data is too big, you get a severe performance penalty.

It is also interesting to note that the 32-bit performance was the same for integers and floating point. On old processors integers were usually faster, but modern processors have separate units for floating point calculations, and many can actually do integer and floating point calculations at the same time, potentially doubling the speed of your program.

Finally, the 16-bit + 16-bit sum went slightly faster than the 16-bit + 8-bit sum, despite the struct being bigger. I do not know yet why this happened; the result surprised me. If someone knows, please send me an e-mail and I will update the article. I also plan to do some delving into the generated assembly when I have time, to see if I can figure it out.

Reference

Great article from Eric S. Raymond explaining structure packing and memory alignment

How Lua logical operators work

Today an intern at Kidoteca asked me an interesting question while learning Lua:

“Why does print(4 and 5) print the number 5?”

This took me a little by surprise; although I use some Lua logical operator hackery, I did not really remember how the operators worked. So I went to read the specification a bit, and found the answer interesting enough to warrant this article.

Lua has three logical operators: and, or and not.

Conversion to boolean

They follow the same rules as the control structures when deciding what is true and what is false: Lua considers everything true, unless it is the boolean value false, or nil.
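A quick check of those rules; note that 0 and the empty string count as true, which trips up people coming from C:

```lua
assert(0)          -- 0 is true in Lua (unlike C)
assert("")         -- the empty string is true too
assert(not nil)    -- nil is false
assert(not false)  -- false is the only other false value
```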

Logical operators behaviour

not is simple enough: it converts whatever is to the right of it to a boolean value and negates it.

and returns the first argument if it is false, otherwise the second. We will explore the reason for this, and how it can be exploited, later.

or does the exact opposite of and: it returns the second argument if the first one is false, otherwise the first.
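Those two rules are the whole answer to the intern's question:

```lua
print(4 and 5)     --> 5    (4 is true, so `and` returns the second operand)
print(nil and 5)   --> nil  (first operand is false-y, `and` returns it)
print(4 or 5)      --> 4    (4 is true, so `or` returns it)
print(false or 5)  --> 5    (first operand is false, `or` returns the second)
```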

Short-circuit evaluation

Both or and and also stop evaluating as soon as they can, meaning that and does not evaluate its second argument if the first is false, and or does not evaluate its second argument if the first is true.

This last property is useful for many things; in my opinion the two most important are saving work (i.e. stop checking stuff once you already know the result) and checking something about data inside a table.

For example, say you want to check whether myTable.myData is loaded. You could just check whether myData exists, but what if myTable itself does not exist? You get a nasty runtime error! The solution is to use the and operator and check for the existence of myTable as well.
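For example (the table and field names here are illustrative):

```lua
local myTable = nil  -- pretend the table was never created

-- myTable.myData alone would raise "attempt to index a nil value";
-- `and` short-circuits, so the index never happens while myTable is nil.
if myTable and myTable.myData then
  print("loaded")
else
  print("not loaded")
end
```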

OR and AND (lack of) type conversion

People coming from other languages, C for example, might be confused: “Why do Lua logical operators return an operand, instead of returning true or false?” The answer is that there is no need to do otherwise. Let us evaluate the following example ourselves.
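The example itself is reconstructed here from the walkthrough that follows, so take the exact expression as an assumption:

```lua
print(5 and not print or "string" and not print)
-- substituting step by step:
--   print(5 and false or "string" and false)  -- not print == false
--   print(false or "string" and false)        -- 5 and false == false
--   print(false or false)                     -- "string" and false == false
-- so the program prints: false
```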

The first step is to decide the evaluation order. In Lua the operator or has the lowest precedence of all, so it is the last thing evaluated; and is just a bit higher, and not is one of the highest. Thus not is evaluated first, and we can edit our code to put its result in place.

not print became false: print is a function, so it counts as true, and the not negated it. Now the left-most and operator is evaluated.

Its operands were 5 and false. Since 5 is true (because everything that is not false or nil is true), the and operator returns its second argument, false. Now we evaluate the second and.

This time the operands were "string" and false. Since any string is true, and again returned the false (the one created by not print). Now we evaluate the or.

or returns its second argument when the first is false; since the second argument was false too, the result is false, and the program prints false.

And in the case of true? Take 1 and 5, for example. Here and returns 5 to the if (because 1 is true), and when the if clause evaluates the 5, it converts it to true. Note that these are two evaluation steps, different from how C++ works, for example, where the && operator checks whether both sides are true and yields true or false.

Logical operator hacks

Because of the way they work in Lua, logical operators allow some interesting constructions; the most popular one is providing default values.
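A typical form of the idiom (the function and the default value are illustrative):

```lua
-- `or` falls back to the default when the argument is nil.
local function greet(name)
  name = name or "stranger"
  return "hello, " .. name
end

print(greet("Ana"))  --> hello, Ana
print(greet())       --> hello, stranger
```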

As we said before, or returns the second argument if the first is false or nil. This means you cannot use this construction if you want to keep false (or nil) as a valid value.

The other use is to imitate C’s ternary (?:) operator: x = a ? b : c; results in b being assigned to x if a is true, and c being assigned to x if a is false.

It works by first evaluating the and. If the first operand is false, and returns that false, which in turn forces the or to return its second operand. If the first operand is true, and returns its second operand, and if that operand is not false, or returns it. This also means the middle operand must not be false or nil, otherwise the construction fails.
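A sketch of the construction and its pitfall:

```lua
local a = true
local x = a and "yes" or "no"   -- x is "yes", like `a ? "yes" : "no"` in C

a = false
local y = a and "yes" or "no"   -- y is "no"

-- Pitfall: if the middle value is false or nil, `or` falls through:
local z = true and false or "oops"  -- z is "oops", not false!
```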

References

Lua Manual

Programming in Lua Chapter 3

Before buying a laptop, check if it has nVidia Optimus; its purpose is to infuriate you

So, you need a performance laptop. You choose only the best components: an Intel i7 with as many cores as you can fit, the best motherboard available, 8 GB of RAM, and an nVidia GPU (because we all know that unless you want to mine Bitcoins, nVidia GPUs are awesome).

Then you launch your performance-hungry software, be it a game, a renderer, or a science application, and… it runs so slowly that you think it is frozen, and your notebook fans go crazy. It is obvious that something is very wrong.

Congratulations, you just met nVidia Optimus. Its purpose is to infuriate you and make you consider AMD; there is no other reason for it to exist. Theoretically the idea is to save power: it switches between a crap but economical Intel VGA chip and the nVidia GPU as needed. Except that you bought this machine because you want performance. Who the hell buys an i7 to save power? You buy an i7 with an nVidia GPU because you WANT power! You want to guzzle electrons doing blazing fast calculations, but instead you get one of the most broken systems ever invented.

How broken is it? Well, the first time I realized I had nVidia Optimus was when researching why the hell almost none of the games on my computer worked properly; games were all buggy, very buggy, and slow. I opened some game settings and it was glaring that the game was not meant to run on an Intel VGA chip… I thought the store had ripped me off, and that this ASUS N46VM was pirated or a knockoff and had no GPU at all. But it does have a GPU. After some googling, I found out that it supported nVidia Optimus (a “feature” so good that it is not advertised, in times when people advertise even that the computer has sound at all…), and I learned how to check which video chip it has chosen when running applications.

I went to the nVidia control panel and found out that it defaulted almost every single game installed to the Intel chip. With one exception: Wolfenstein 3D. Yes, nVidia Optimus, which theoretically can choose the appropriate GPU, decided that Fallout NV, Dark Souls, The Witcher 2 and several other heavy games deserved the Intel VGA chip, while Wolfenstein 3D, the one for DOS, the one made before DOOM, needed the nVidia GPU.

At least this is “fixable”, in the sense that you can manually set which programs use which GPU. It is really bothersome and irritating, but doable…

But things get worse: the Optimus driver only works on Windows (on Linux and Hackintosh it just causes lots of bugs), and if you want to remove it, seemingly you have to reinstall Windows entirely (I saw this on Tom’s Hardware Guide; I found no other way of removing it on Windows).

And finally, my issue from yesterday, the great motivation for this post. I am greatly upset with this ASUS laptop in general: one of the USB ports does not work, it shuts itself down randomly, it suspends randomly, and it has been this way since I bought it (from what I saw on the internet, it does that to other people too). But what REALLY annoys me is that its screen looks very funky. I never found a proper balance that is pleasing to the eye, and like many laptops it has no contrast control except through the driver.

But remember we have TWO VGA chips? Well, nVidia Optimus breaks that too: as the chips switch, the settings you saved for gamma, contrast, colour temperature and whatnot change. Not only that, some people report the settings are completely lost, sometimes right after closing the calibration app.

I thought that maybe, by updating my video drivers, I would see some improvement. The ASUS site has no new driver, but the nVidia site had a new laptop-specific driver, so I went there, got the new driver and installed it.

Result: the nVidia GPU is now disabled. It does not even launch the nVidia control panel anymore, nor any software that uses D3D or OpenGL. Also, Windows skipped creating a restore point for the driver update (it has restore points for every app I installed before the driver, and one just after I installed it, labeled “DirectX install”…), so I am downloading an old driver from the ASUS site and HOPING that it will fix my machine…

Next time I buy a laptop (this was my first), I will avoid Optimus like the plague, because it IS the plague. This is the time I wish I were rich enough to buy a MacBook; it would have no issues with Optimus, Windows 8 (this laptop came with Windows 8, and the UEFI is buggy too and does not let me disable Secure Boot to install an OS of my preference… I only bought this laptop because NO ONE in Brazil was selling empty laptops without charging a premium :( ), non-working USB, or random shutdowns… And I say that as someone who dislikes Apple.

Piet Mondrian, De Stijl and Pondrian

I have made a game named Pondrian; the game is fairly simple, just a pong clone. This post will explain Piet Mondrian, De Stijl, and the decisions behind the creation of Pondrian.

De Stijl

From 1917 up to 1931, a group of Dutch people and some friends made a body of art and design work named “De Stijl”, Dutch for “The Style”. The work is considered part of the larger art movement of neo-plasticism, and it involved painting, architecture, furniture design, poetry, music, and many other forms of art and design (note: I believe art and design do not differ much, but that is a discussion for another day).

It started, among many other reasons, because of the several new art styles that appeared in the early 1900s, like Cubism and Impressionism, and also because the Dutch decided to stay neutral in the First World War (which started in 1914); they were thus stuck inside their own country, and needed to develop their own art without much external influence.

An artist named Theo van Doesburg started meeting artists around that time, and after he had met a sufficient number of them, Theo started a magazine named “De Stijl”, which advocated a new form of art in the Netherlands, one based on simplicity and the basic elements of each art. In the most developed case, the visual arts, this meant using only orthogonal lines (i.e. vertical and horizontal; sometimes other directions too, but never arcs or non-straight lines) and primary colours, or at least what they then believed to be the primary colours (black, white, blue, red, yellow).

Theo’s journal featured several art manifestos, signed by all formal members of the “De Stijl” group, and many interesting essays. Piet Mondrian, a Dutch-born artist who was living in France but got stuck in the Netherlands because of the war (he was only visiting when the war broke out, and thus could not return to Paris), wrote several essays about Neo-Plasticism, and in fact invented the term. His essays describe very precisely what De Stijl artists created in the visual arts; they mostly ignored other forms of art, but he struck up a friendship with a musician, for example, and still ended up influencing other art forms.

Among the most iconic De Stijl works, demonstrating very clearly the use of only the basic elements of an art form, were paintings by Theo and Mondrian, a house built by Gerrit Rietveld, and a chair also designed by Rietveld.

Theo van Doesburg’s Composition VII

 

Piet Mondrians’s Composition with Yellow, Blue and Red

 

Gerrit Rietveld’s house for Mrs. Truus Schröder-Schräder

 

Gerrit Rietveld’s Red and Blue Chair

 

After the First World War ended, external influences reached the group again, among them Bauhaus (a German architecture and design school), Russian Constructivism, and even a bit of Dadaism. Theo in particular started to argue that diagonal lines were more pure and basic than vertical and horizontal ones, which made Piet Mondrian formally leave the group in 1924, though he kept making art along what we can still consider De Stijl lines; many other members joined and left as well.

Theo died in 1931, and with him went the magazine and the formal group. Some artists kept working with what they had developed during that time, including Mondrian and Rietveld; others abandoned the style completely and returned to their classical roots, making photo-like paintings again, for example.

Piet Mondrian

Born Pieter Cornelis Mondriaan in 1872, he renamed himself Mondrian in 1906; Piet was a nickname, never his legal name. He learned the visual arts in early childhood: his father was a school teacher and also a drawing teacher, and taught Piet very well. His earliest art was conventional, photorealistic paintings of varied scenes that showed he was a skilled painter. Later he started experimenting: in 1908 he made a painting named “Avond”, just a tree, realistic in shape but in primary colours only. In 1911 he moved to Paris and produced “Gray Tree”, where he tried to draw a tree without colours, in the cubist style, a result of living in Paris (where, for example, Picasso also was). In 1913 we can see a step further: “Composition II” looks like a sort of tree, but made only of vertical lines and some shades of primary colours. Then in 1918, after meeting Theo and moving back to the Netherlands, comes a work more obviously in the De Stijl style, the painting “Composition with Colour Planes and Gray Lines 1”. The works I decided to reference are his most famous ones, from after 1920, for example the 1925 “Lozenge Composition with Red, Black, Blue, and Yellow”.

Mondrian’s Avond (1908)

Mondrian’s Gray Tree (1911)

Piet Mondrian’s Composition No. II; Composition in Line and Colour (1913)

Piet Mondrian’s Composition with Color Planes and Gray Lines 1 (1918)

Piet Mondrian’s Lozenge Composition with Red, Black, Blue, and Yellow (1925)

Pondrian

I started making games at a very young age. When I was still 6 years old my dad started to teach me how to play with DOS batch files, and when I was eight he gave me a book on MSX BASIC programming. I did not have an MSX, I had a 286 with GWBASIC, but I spent many afternoons developing my own little games (the coolest one was a side-scrolling helicopter simulator; sadly it was also very buggy, since some MSX terrain drawing routines behaved very badly on the IBM PC).

My first modern game programming API, mods aside, was Allegro 4. The problem was that back then I fell in love with developing engines instead of games, and made many, many half-complete engines that neither I nor anyone else ever used. When I decided to learn XNA, I also decided that the most important thing was NOT to make an engine, but to make a full game, a simple one, and for this I chose Pong.

Paddle Wars 3000 menu

The result was Paddle Wars 3000, a pong game with powerups. Paddle Wars 3000 later spawned its own series (Paddle Wars), but the idea of making pong games to test an API remained. A friend of mine then suggested I use Piet Mondrian paintings as inspiration; he thought a moving painting as a game would be funny, and thus the idea of Pondrian was born.

The first chance to implement Pondrian came when testing the Sparrow SDK for iOS; it was made in four hours on a more or less idle day. I will not explain more about those past Pondrian versions, because there is more information about them here on this site.

The current version was made to learn Allegro 5 while using C (not C++).

The first decision was the direction of play. Although most pong games use a horizontal field, I decided on a square field with vertical play; the reason is to look closer to a painting without breaking the gameplay too much, and it is also easier to control things with the mouse horizontally than vertically.

After this, I had to decide how to make the play more dynamic. Just reflecting the ball around is boring, very boring, and can end in situations where no player can win and you just sit repeating the same movements, or no movement at all. In past games I used a simulated “curved” paddle, where the farther the ball hits from the center, the more its angle of reflection changes, but this has two issues: first, the paddle is clearly a rectangle, not curved at all; second, it makes the game too hard for many players, who cannot grasp the paddle curvature, much less move to the position they need to be in. All my past paddle games featured paddle curvature and paddle friction (i.e. you could “kick” the ball by moving the paddle while the ball was in contact with it), and clearly few players could play them effectively. This time I decided instead to make the ball randomly speed up, which means the only skill you need is tracking the ball and not allowing it to cross your end line.

Another gameplay decision was how to handle AI difficulty. I decided to give the AI a maximum speed for moving toward the ball, then increase it as the player wins and decrease it if the player starts losing too much.

Then I had to make some feedback decisions. First, I did not want a visible score text, to avoid ruining the painting visuals, so I made the score control sound pitch: when the ball hits a paddle or a point is scored, an appropriate sound plays, with a pitch that depends on the score. I also made the ball size “wobble” depending on what it hit, with an extra, very visible wobble when it speeds up or a game starts.

The game has no music because I could not find a public domain recording of appropriate music. I considered, for example, using one of the tracks of Proeven van Stijlkunst (youtube recording), a collection of piano pieces by Jakob van Domselaer, a close friend of Mondrian while in the Netherlands and a musician who tried to apply De Stijl principles to music.

Mechwarrior Online and how to NOT do a Free to Play Game

I spent some time playing Mechwarrior Online, but I had to stop.

The game is amazing. It is fun; controlling the hulking mechs around is fun. They are heavy and slow and pack a serious punch, your screen shakes and sparks fly when missiles hit you head on, you hear alarms and bits of metal flying, and you get desperate when someone manages to sneak up on you and shoot lasers at your back. Yet… I had to stop; it managed to become boring.

In free to play there is an important metric for designers, which they call “churn rate”: it measures the share of players who stop playing after some arbitrary time. If you do your free to play wrong, you end up with low conversions in the first place, or with a high churn rate, and the latter is the case with Mechwarrior Online.

Mechwarrior Online fans are quick to point out that it is not “pay to win” and that the best machines must be earned by playing normally. Although this is debatable (the real-money consumables are better than the game-money ones, for example), that is NOT the issue here. The issue is that the game has managed to get VERY boring, and the devs, old players and testers do not realize it. The end result is a hardcore following of old players who sometimes keep spending some money, but the game will never move beyond that; new players get bored and quit. It has to do with some balance changes made after the heaviest period of testing.

The draw of most mech games, especially a Battletech game, is that you can make your own custom awesome mech and kick ass (or get your ass kicked if you are a poor designer). Mechwarrior Online promises that, but does not deliver. The first 25 matches give a new player much more money than normal, enough to buy a first mech chassis and some parts, but after that the game becomes grindy, very, very grindy. I, for example, bought a Catapult with only missile slots with my 25-match money. After a while playing with the defaults, I got bored and made a sniper Catapult filled with long-range missiles; that got boring too after a while, so I started switching it to what players call a “splatcat”, fitting it with absurd amounts of short-range missiles, running up to other mechs’ faces and blowing them up (and quite likely yourself along with them sometimes). The problem is that several parts cannot be kept: the “Artemis” system that I paid lots of money to install for the sniper build had to be removed again, removing it was very expensive, I had to buy several parts, and eventually the parts I needed were so expensive that each extra one would take an entire month of playing to buy.

Then you also have XP, which buys skills for your mech and pilot. Mech XP is fairly fast to earn, but you cannot spend all of it until you have done the same with 3 different variants of the same mech, and each extra chassis would cost me about 4 months of playing. There is no way I will play a gestation-long grind just to upgrade my mech a little bit. Then there is the XP for pilot skills, which accumulates so slowly that I never earned the first pilot skill.

And if you get bored with your mech, you cannot easily buy another; that is also very expensive. The issue with the game is not that it is not fun to play; it is fun. The issue is that the major part of it, the customization, is SO slow and grindy that a player unwilling to spend money will get bored before he ever gets his second mech, or finishes building his first one, and will quit. Of course, the intention of the game IS to make it boring and push people into spending money; the problem is that the mechs are also ridiculously expensive. The price of some mechs could easily buy you several full games, so why the hell would I spend, on a bunch of mechs in an online game, an amount of money that could buy me the rest of the BattleTech series to have fun with instead?

If you make a free-to-play game, don’t make it grindy and expensive; that is only good for scaring new players away. And if you keep scaring players away, the game gets boring for everyone. What fun will the old paying players have if they never see other people? What fun is there in the awesome, expensive mech you paid real money for, when there is no one to shoot?

Don’t do that. If you make a free-to-play game, remember that people have lives. The ones most likely to pay for your game are not teenagers with loads of free time; they are people with jobs, and those people won’t pay for your game if it bores them!

Lua error handling techniques

Lua is a powerful language, but it has some flaws. Among them is the fact that you can easily type something wrong, especially lowercase or uppercase in the wrong places by accident, and it will silently introduce errors into your code.

Coroutines make it even more silent: when there is an error inside a coroutine, the result is usually the coroutine just ending, leaving no error message behind. But all of this has solutions.

First, many functions in Lua return a status value and a result value. The most important of these is coroutine.resume(): using Lua’s multiple return values, it returns a boolean indicating success (or not), plus an error message if an error happened.

Going along with that, we have the function assert(), which stops execution if needed and prints an error message. assert() takes two arguments: the first is a check, and if it is false, assert() stops execution and prints the second argument to wherever errors are being printed. assert()’s arguments and coroutine.resume()’s return values line up exactly, allowing you to write assert(coroutine.resume(someCoroutine)).
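A minimal sketch of the pattern (the failing coroutine body here is invented for illustration):

```lua
-- A coroutine whose body raises an error on purpose:
local co = coroutine.create(function()
  error("boom")
end)

-- coroutine.resume() does not crash the caller; it returns a status
-- flag plus an error message, which is exactly what assert() expects:
local ok, err = coroutine.resume(co)
print(ok, err)  -- false, followed by a message containing "boom"

-- Wrapped in assert(), a silent coroutine death becomes a loud stop:
-- assert(coroutine.resume(co))  -- would abort the program, printing err
```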

Another interesting function is pcall(), which means “protected call”. Its first argument is a function value and the remaining arguments are the parameters; an example would be pcall(print, “hello world!”). What pcall() does is ensure the program keeps running even if the called function fails for whatever reason. If you DO want the program to stop and print an error message, pcall() can also be wrapped in an assert(), exactly like coroutine.resume().
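A quick sketch of pcall() in action (the failing function is made up for illustration):

```lua
-- pcall() runs a function in protected mode: an error inside it
-- no longer aborts the program, it just makes pcall return false.
local ok, err = pcall(function()
  error("something went wrong")
end)
print(ok, err)  -- false, plus the error message

-- Extra arguments are passed through to the called function:
local ok2, result = pcall(string.rep, "ab", 3)
print(ok2, result)  -- true    ababab

-- And, exactly like coroutine.resume(), it fits inside assert():
assert(pcall(print, "hello world!"))
```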

What if you need to generate an error instead, one that will be caught by pcall() or coroutine.resume() (or by your interpreter’s top level)? For this we can use a function aptly named error(): what it does is raise an error.
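A sketch of error() being caught by pcall() (the divide function is invented for illustration):

```lua
-- error() raises an error that pcall() (or coroutine.resume()) catches:
local function divide(a, b)
  if b == 0 then
    error("division by zero")
  end
  return a / b
end

local ok, err = pcall(divide, 10, 0)
print(ok, err)  -- false, plus a message containing "division by zero"
```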

And a final issue: sometimes you want return values besides the success flag. coroutine.resume() in particular returns an error message as its second value only upon failure; when it works, it returns whatever the resumed coroutine yielded or returned. Because of this, assert() and pcall() can pass return values along, and since their first value will still be the true/false status, you can use select() to pick out the other values, for example like this:
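A sketch of the idea (the yielding coroutine here is invented for illustration):

```lua
-- A coroutine that yields three values at once:
local co = coroutine.create(function()
  coroutine.yield(10, 20, 30)
end)

-- On success, resume() returns true followed by the yielded values:
local results = { coroutine.resume(co) }
print(results[1], results[2], results[3], results[4])  -- true 10 20 30

-- select(2, ...) drops the status flag and keeps only the payload:
local a, b, c = select(2, coroutine.resume(coroutine.create(function()
  coroutine.yield(1, 2, 3)
end)))
print(a, b, c)  -- 1 2 3
```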


Why you should avoid magic numbers

Sometimes, when you are learning to code, people tell you to avoid magic numbers: numbers written in the middle of your source files without any clear explanation, or even with an explanation in comments.

But I tell you: even with an explanation in comments, you should avoid them and use constants, even in languages that don’t have constants. For example, most of my current coding is done in Lua, which has no concept of constants, yet my files often have a couple of constants neatly grouped somewhere, with a special naming convention (FOR_EXAMPLE_ALL_CAPS_WITH_UNDERLINE).
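In Lua that convention can look like this (the names and values here are invented for illustration):

```lua
-- Instead of scattering a bare 4 all over the code...
-- player.x = player.x + 4

-- ...group named constants at the top of the file:
local PLAYER_SPEED = 4
local MAX_LIVES = 3

local player = { x = 0, lives = MAX_LIVES }
player.x = player.x + PLAYER_SPEED  -- the intent is now readable
```

Lua will not stop you from reassigning PLAYER_SPEED, but the all-caps convention tells everyone (including future you) not to.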

Going even further than constants, you can make functions that return the values you want. But why go to all this hassle? It has to do with maintenance of the code.

A priority when writing code, besides making it work (always remember this: user experience is ALWAYS first; even when you are doing something very, very backend, if your code somehow results in bad user experience, you are doing it wrong), is ease of maintenance. Inexperienced programmers are usually tempted to pull off crazy tricks to show off, or to make the code super blazing hyper amazingly fast, or to create some really awesome architecture with thousands of layers of abstraction, and so on.

Then someone comes along and says: “I don’t need maintenance! I am coding alone and I will remember where all my variables are and what they mean!” Then I ask: what if you have to change something that you hardcoded everywhere?

I made that mistake once, in my early programming days. I made a game with a score-input screen designed for devices without keyboards (the game was for computers, but I designed it imagining a target device like a very simple phone; this was back before the invention of touch phones, and my own phone was still black and white). So I made a grid of letters, and you could choose which letter to input by navigating to it with the arrow keys. In my naive way, I hardcoded the position of everything: lots and lots and lots of numbers describing the layout of the screen and the places the cursor could go. And then… I had to change the game resolution.

It was that day, when I opened my source and saw hundreds of magic numbers, that I learned how painful using magic numbers can be. If I had used math and made a layout that adapted to the resolution on its own, there would have been no need, but no, I had built the program on magic numbers. I had two choices: replace them all, or… delete the stupidity I had made and do it again. I will let you figure out on your own which I chose, but this much I can tell you: don’t use magic numbers, even if you are coding alone on your own project, because if you have to change them all, or if you forget anything about them, it WILL bite you back.

Screen and asset image scaling for games

GI Joe for NES

One common problem in the game industry is how to handle image resolutions on multiple types of screens. Here we will discuss the types of scaling that exist, some scaling techniques and details of how they work, and finally the solution I have chosen to handle scaling problems in Kidoteca mobile games.

Upscaling

The first problems related to this were about how to play older games on newer screens, which requires scaling upwards. The most obvious solution is to just take the nearest pixel in some arbitrary direction and replicate it.

GI Joe nearest neighbor

As can be seen in the previous image (scaled 2x), this has an obvious problem: the result is just a blocky image, not necessarily clearer, and certainly not prettier. The technique also has a less obvious problem: it only works for integer scaling. You can only make the images two, three, four times bigger and so on. In some cases this is enough; old console games and early computer games tended to use resolutions close to 320×240, and you can double that to 640×480 and still fit a 480p TV screen, or a 720-pixel-tall laptop screen. But sometimes you have a game that uses 55% of the screen size, and if you double it, it gets bigger than the screen.

The easiest way to scale by a non-integer ratio is to use a technique similar to the last one, but mixed with filtering.

Filters and Interpolation

“Filters”, as some people like to call scaling techniques, are algorithms and mathematical formulas that calculate the value of a certain pixel based on information captured from its neighbours. This is easier to understand once you realize that pixels are NOT squares[1]: pixels are just points with a colour, and an image on a computer is just a matrix of sizeless points. The fact that modern screens display them as squares or rectangles does not matter when doing the necessary math (and by the way, some TV sets used to display pixels as triangles composed of circles, one for each colour…). The simplest filters are just mathematical formulas, without an algorithm; those are actually interpolation functions, taking some values and interpolating between them.

Bilinear Interpolation

Bilinear interpolation is the most common of the simple techniques for upscaling an image by a non-integer factor; it can also be used to downscale an image. In theory it is simple: take the values of the nearest four pixels and apply linear interpolation in 2D. Linear interpolation is what you use when you want to find a value at a certain position between two known values.

An example of linear interpolation: if you have a graph full of points and draw straight lines between each point and the next, the result is the same as using linear interpolation repeatedly to find all the in-between points.

But we can have an even simpler example. Suppose you are planning a car trip and want to know the temperature at your destination, but all you can discover is that at marker 30 of the road the temperature is -20 degrees Celsius, and at marker 2000 it is 30 degrees Celsius. Your destination is at marker 800, and you have no other temperature information.

The solution is to use linear interpolation. First you figure out at what percentage of the road your destination sits, dividing the distance from the first marker to your destination by the length of road between the markers; the formula is “(target – start) / (end – start)”, which for us is (800 – 30) / (2000 – 30), that is 770 / 1970, so our destination is about 39% along the road. Then you need the temperature range between the markers, and 39% of that range; the formula is “(end value – start value) * previous result”, so it is (30 – (-20)) * 0.39, which is 19.5. Finally, add that to the starting temperature: -20 + 19.5 gives roughly -0.5 degrees at the destination.

You can combine the formulas for linear interpolation, they become y = y0 + (y1 – y0) * ((x – x0) / (x1 – x0)) .
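In Lua, that combined formula and the road-trip example above could look like:

```lua
-- The combined formula as a function: x0/x1 are the known positions,
-- y0/y1 the known values, and x is where we want the estimate.
local function lerp(x, x0, x1, y0, y1)
  return y0 + (y1 - y0) * ((x - x0) / (x1 - x0))
end

-- The road-trip example: markers 30 and 2000, temperatures -20 and 30,
-- destination at marker 800:
local t = lerp(800, 30, 2000, -20, 30)
print(t)  -- roughly -0.46 degrees
```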

Bilinear interpolation is when you do this twice, once for each direction (vertical and horizontal in our case), with the weights of the two directions multiplying together. Or in simple terms: each resulting pixel is the weighted average of the nearest four pixels of the original image.
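A sketch of bilinear interpolation for a single point, using a simplified lerp(a, b, t) form:

```lua
-- fx and fy (both in [0, 1]) locate the point between four corner
-- pixel values: top-left, top-right, bottom-left, bottom-right.
local function lerp(a, b, t) return a + (b - a) * t end

local function bilerp(tl, tr, bl, br, fx, fy)
  local top    = lerp(tl, tr, fx)  -- interpolate along the top edge
  local bottom = lerp(bl, br, fx)  -- and along the bottom edge
  return lerp(top, bottom, fy)     -- then between the two results
end

print(bilerp(0, 10, 20, 30, 0.5, 0.5))  -- 15, the average of the corners
```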

gijoe_nes_bilinear

The image above is the result of bilinear interpolation. Notice that it now has some strange defects, looking a bit fuzzy, blurry and a bit aliased. Clearly, although bilinear interpolation is a great start for non-integer scaling, the result is still lacking, or to be frank, ugly. It has one advantage though: it delivers decent quality when placing textures on a 3D object, and is very fast to calculate relative to other methods of doing that (obviously, doing no interpolation, i.e. something akin to the nearest-neighbour approach already shown, is faster, but the result is so ugly and jarring that it is pointless to attempt).

Bicubic interpolation

Bicubic interpolation is the next step up from the previous one, and it is also used to map 2D textures onto 3D scenes. The principle is similar: you take two cubic interpolations, vertical and horizontal, and combine them. It requires more data than 4 pixels, because each direction needs more than a start and an end point. If this were a graph, instead of straight lines running between the points you would get smooth curves that take more points into account; where the graph has, say, 3 points slightly out of line, instead of two straight segments meeting at an oblique angle you would see a very smooth arc.

gijoe_nes_bicubic

The image above shows bicubic interpolation in action. Yes, it is smoother than bilinear, but it has a problem with halos near edges (look at the point where the black and blue background meet, which now has a light blue line, or the life bars, which now look a bit glowy). Sometimes this is even desirable, increasing the perceived sharpness compared to a bilinear-interpolated image, but sometimes the effect is very blaring and ugly. It is a result of the smoothing across several points mentioned previously: sometimes the curve “overshoots” and generates ripples in the image.

Algorithmic filters

The step beyond interpolation (there are several other interpolation methods we haven’t covered; we will cover them under downscaling, where they matter more) is to use filters with more complex algorithms: instead of just taking samples and doing math on them, they make decisions based on the available information.

Eagle

One early filter of this kind was the Eagle scaling filter, made only to double an image’s size. It works by taking each pixel and turning it into four. For each of the four new pixels, it checks whether the three original neighbours on that corner’s side (for the top-left pixel: up, up-left and left) all share the same colour; if they do, the new pixel takes that colour, otherwise it keeps the original pixel’s colour. This repeats until every pixel has been doubled.
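A rough sketch of the Eagle idea in Lua, operating on an image stored as a table of rows (img[y][x]); to keep the example short, missing neighbours at the borders simply fall back to the pixel’s own colour:

```lua
local function eagle2x(img, w, h)
  local out = {}
  for y = 1, h * 2 do out[y] = {} end
  for y = 1, h do
    for x = 1, w do
      local c = img[y][x]
      local up    = img[y-1] and img[y-1][x]   or c
      local down  = img[y+1] and img[y+1][x]   or c
      local left  = img[y][x-1] or c
      local right = img[y][x+1] or c
      local ul    = img[y-1] and img[y-1][x-1] or c
      local ur    = img[y-1] and img[y-1][x+1] or c
      local dl    = img[y+1] and img[y+1][x-1] or c
      local dr    = img[y+1] and img[y+1][x+1] or c
      -- each of the four new pixels copies the diagonal colour when the
      -- three neighbours on its side agree, otherwise keeps the original:
      out[y*2-1][x*2-1] = (up == ul and ul == left)    and ul or c
      out[y*2-1][x*2]   = (up == ur and ur == right)   and ur or c
      out[y*2][x*2-1]   = (down == dl and dl == left)  and dl or c
      out[y*2][x*2]     = (down == dr and dr == right) and dr or c
    end
  end
  return out
end
```

Note the trade-off this decision rule implies: a lone pixel surrounded by a uniform colour gets smoothed away entirely, which is exactly the kind of detail loss Eagle is known for.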

This algorithm was interesting because it produced images that look very different from interpolated ones, sometimes much nicer. Several other algorithms were later based on the Eagle idea, like 2xSaI, Super Eagle, Eagle 3x and several others.

HQx

The HQx family of filters was inspired by the Eagle filter too. They work by trying to detect lines in the original image; it is unfortunately too complex to explain in this single post, but it has an official site[2].

gijoe_nes_hq2x

HQx (HQ2x in this case) is clearly great at making simple shapes clearer: the text is much better now, and the arrow near DUKE looks very good. The problem is that the complex parts of the screen, like the foliage or the moss on the ground, look blotchy, weird and confusing. Also, although not shown in this picture, HQx tends to turn curved objects into series of straight lines that look much less curved or round.

xBR

Seemingly inspired by the HQx filters, someone made an even better filter. It works with multiple levels of filtering: it first does something similar to HQx, then scans the image again and tries to improve it. The result is rather interesting, and it has a very detailed explanation[3].

gijoe_nes_2xbr

The image above was made with an older version of xBR, because the newer versions only do 4x (too big for this blog post). You can clearly see that it detects round shapes much better than HQ2x, with the background now looking more organic, and the ground is a bit less messy. The problem is that the letters also became too rounded, and the ammo icon is now quite distorted, but overall it is also very interesting.

Downscaling

For newer games the problem is not upscaling but downscaling, especially with mobile games, or very high resolution games on TVs or computers. An extreme example: make a game that uses the maximum resolution of a new TV, 7680 × 4320 pixels, and then also make it playable on the laptop I am using now, 1366 × 768. That means making images for the TV and then figuring out a way to display them downscaled to about 18% of their original size.

A more practical example is the issue I face working on mobile games: displaying things on the newest iPad (2048 x 1536) while also supporting old iPhones (480 x 320). The usual industry solution is to ship multiple sets of images with your application and use the most appropriate one (and, when none is perfect, scale it with some interpolation in real time). At Kidoteca we create the images at iPad size, then make versions with half and one quarter of the resolution. As a naming convention we act as if the smallest image were the original and append @somenumber to the bigger ones: an image called “test.png” will thus have versions named “test@2.png” and “test@4.png”, with double and quadruple the size respectively.

1button@4

Originally we created the smaller images with whatever filtering came with the image-editing software, so they were usually bilinear or bicubic interpolated. As we saw previously, those two interpolation functions are really simple, and although they produce acceptable images, they do not produce great ones.

Gaussian filter

The Gaussian filter is a filter in the proper sense: it can be used for interpolation, but it can also be applied to a non-scaled image, just filtering it. It works by applying a mathematical formula to a group of pixels. The formula is f(x) = a * e^(-(x – b)² / (2 * c²)) + d, where e is Euler’s number[4]. Its shape on a graph is a bell curve; you can imagine the filter’s effect on an image as a 3D bell placed over each pixel, with the height of the bell at each point determining how much the pixel below it affects the pixel at the centre.
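As a sketch in Lua (dropping the vertical offset d, which is rarely needed for image work):

```lua
-- The bell curve itself: a scales the height, b centres the bell,
-- and c controls its width (the standard deviation).
local function gaussian(x, a, b, c)
  return a * math.exp(-((x - b) ^ 2) / (2 * c ^ 2))
end

print(gaussian(0, 1, 0, 1))  -- 1: the peak of the bell
print(gaussian(3, 1, 0, 1))  -- about 0.011: the tails die off quickly
```

That rapid fall-off is why a Gaussian kernel can safely be cut to a small neighbourhood of pixels, a property that will matter when we compare it to the sinc filter below.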

1buttongaussian

As you can see in the image above, the result of a Gaussian filter is blurry. In fact, when you DO want to blur an image on purpose, the Gaussian filter is really great for that, with careful tuning of its parameters.

Quadratic filter

The quadratic filter works similarly to the gaussian filter, but with another equation.

1buttonquad

The result is even blurrier than with the Gaussian filter.

Sinc filter

The “sinc” filter is named after the mathematical sinc function, short for “cardinal sine”. The cardinal sine formula is “sinc(x) = sin(x) / x”, but for signal processing (like scaling images or meddling with audio) another formula is used, correctly named the normalized cardinal sine, though most people call it “sinc” anyway: “sinc(x) = sin(π * x) / (π * x)”. Its shape looks like a sine wave, but with the peaks near the centre being taller, and the valleys near the centre being slightly wider apart.
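The normalized version in Lua, with the special case at zero handled (the limit there is 1):

```lua
local function sinc(x)
  if x == 0 then return 1 end
  return math.sin(math.pi * x) / (math.pi * x)
end

print(sinc(0))    -- 1, the central peak
print(sinc(0.5))  -- about 0.637, i.e. 2 / pi
print(sinc(1))    -- 0 up to floating point noise: zero at every integer
```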

1buttonsincandring

The sinc filter has two problems. First, it easily causes severe ringing artifacts in the image: as I mentioned, its shape is a sine wave, so it can make the entire image “wavy”, like when you throw a stone into a pond. Second, although all the previous functions have a limited scope of pixels, from 4 pixels in bilinear interpolation to an arbitrary but limited number in the Gaussian filter, the sinc filter can take the whole image as input. Unlike the Gaussian bell shape, which eventually gets so close to zero that you can safely ignore a pixel’s contribution, the sinc waves take a very long distance from the centre to get near zero. Because of this, the sinc filter is never actually used in its pure form; instead, most people use a “windowed” version, where you choose a window, a region of the image around each pixel, over which to calculate the filter’s effect.

Lanczos filter

The Lanczos filter is a variation of the sinc filter: it uses the same sinc function, but also mathematically limits the window of the filter (and thus the contribution of each pixel to the result).
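A sketch of the Lanczos kernel: the sinc function windowed by a stretched copy of itself, where a is the window size (2 and 3 are the common choices; sinc is repeated here so the snippet stands on its own):

```lua
local function sinc(x)
  if x == 0 then return 1 end
  return math.sin(math.pi * x) / (math.pi * x)
end

local function lanczos(x, a)
  if x <= -a or x >= a then return 0 end
  return sinc(x) * sinc(x / a)
end

print(lanczos(0, 3))    -- 1: full weight at the centre
print(lanczos(3, 3))    -- 0: nothing outside the window contributes
print(lanczos(0.5, 3))  -- about 0.61
```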

1buttonlanczos

Lanczos is actually great for downscaling; for upscaling (not shown here), not so much. Its biggest issue is that although it rings less than the plain sinc filter, it can still create ringing. The advantage is that images that are supposed to look sharp end up looking sharp, in contrast to bilinear, bicubic or Gaussian resampling, which make downscaled images blurry instead. Lanczos is also slow, not really suitable for anything realtime.

Solution for Mobile Software

As mentioned earlier, mobile software frequently ships with several sets of images, and the technique used at Kidoteca was to create only the largest image needed and scale it down with interpolation. It was good enough, but since there is an incredible variety of screen shapes and sizes, further interpolation was needed on most screens, making the image quality suffer quite a bit. I downloaded a command-line image processing program named ImageMagick[5] and fooled around downscaling images until I found the best results, and Lanczos won: for our art style and technique (starting big and downsizing) the result is really great, with the smaller images looking very sharp.

References

1. A Pixel is NOT a Little Square (PDF)
2. HQx official site
3. xBR thread with detailed explanation and updates
4. Euler’s Number on Wikipedia
5. ImageMagick official site

Supplemental links

Bicubic interpolation detailed explanation
ImageMagick resampling filters manual
xBR vs HQx Interpolation Filter Comparison

Simple Object Orientation in Lua

This post is a repost from the AGF Games blog.

 

Lua is not object oriented… But some people like object orientation, and it has some advantages…

I decided to use a simple form of it in my project. It does not support multiple inheritance and many other features; it was made to be very, very simple, only to organize the project and provide some form of code reuse.

So, what features did I try to support?

First, ease of creating a class.
Second, ease of creating an instance.
Third, an automatic constructor if you do not want to write one.
Fourth, easy “operator overloading” support.
Fifth, static variables (I like them).

Some people will tell me that there are lots of ready-made implementations of this, and that you only need to search the web… So why make a new one? Well, almost all of the ones I found were too complex for my taste, and mostly written for Lua 5.0. Lua 5.1 has a particular new feature, a powerful one, that I really wanted to use to build my class system.

What do we need from Lua? Mostly metatables, and the metamethods __index and __call, the second one being available only in Lua 5.1 and onward.

So, how do we do it? First, we need to create a new chunk (I suggest doing it by creating a file; I will later make a tutorial on how to make a non-file chunk).

My file is named kidclass.lua (the “kid” comes from Kidoteca, the company where I am working now, making children’s games).

This file follows my usual style for making “modules”, without using the module() function (which was removed in Lua 5.2, and rightfully so… module() was just plain stupid).

Important: remember to make kidclass (or whatever name you choose) local, so you do not end up polluting the global namespace (i.e. _G) and also avoid accidental name conflicts…
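A sketch of what that module style looks like:

```lua
-- kidclass.lua, in the local-module style:
local kidclass = {}

-- ...the module's functions are added here as fields of kidclass...

-- the last statement of the file hands the table to require():
-- return kidclass
```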

How do you use files made in that style? You require them and capture the return value, for example local kidclass = require("kidclass").

Note that doing just require, without assigning its return value to something, would be useless, because back when we created the module, we made it local.

Now, how do we achieve the first objective, ease of creating a class?

The easiest way would be to just do myclass = {};
But that would prevent us from achieving the other features, so we need a function, so that creating a class becomes “myclass = kidclass.new();”
So we need a kidclass.new that, at its most basic, returns {} (an empty table); the following code would be the result:
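A sketch of that most basic version:

```lua
local kidclass = {}

-- The most basic version: a class starts life as an empty table.
function kidclass.new()
  local class = {}
  return class
end

local myclass = kidclass.new()
```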

And how do we make the second objective work now?

I thought the nicest way to create an instance would follow the style of C++ (more or less), so “instance = Object()”. Except Lua does not have a way to create a constructor that is named like the object… Oh, actually it does have something like that, since 5.1! This is the reason our code will be different from what you usually see on the internet. We will use the mighty __call.

So, what does __call do? __call is a metamethod (a method from a metatable) that is invoked when you attempt to call a table (i.e. you do “t = {}; t()”).

Since __call is a metamethod, we need to use a metatable… I therefore set the metatable that contains __call inside the “new” function we already created.

Since kidclass is what we defined as the metatable for each class, we need to create __call on kidclass:
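A sketch of how that looks (reconstructed from the description; details may differ from the original kidclass):

```lua
local kidclass = {}

-- Calling a class value (e.g. "myClass()") triggers this metamethod,
-- which builds and returns a fresh instance:
kidclass.__call = function(class)
  local instance = {}
  return instance
end

function kidclass.new()
  local class = {}
  setmetatable(class, kidclass)  -- so that class() reaches __call above
  return class
end

local myClass = kidclass.new()
local myInstance = myClass()
```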

OK, so what have we done so far? We made a way to create a class using a simple method.

Thus doing “myClass = kidclass.new()” will create an empty table that has kidclass as its metatable.

And we created a way to create an instance.

Thus doing “myInstance = myClass()” makes Lua check whether myClass has a metatable (it has: kidclass), then check whether that metatable has a __call, and then run the function contained in __call (this is why __call is defined as “= function()”).

But what if we want, for example, to create a Point class that supports Point(x, y) as a constructor? This means we now need to add support for constructors…

We change __call to this:
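A sketch of that change (the constructor field name “Init” here is an assumption, not necessarily what the original code used):

```lua
local kidclass = {}

-- __call now forwards its arguments to a constructor, when the class
-- defines one:
kidclass.__call = function(class, ...)
  local instance = {}
  if class.Init then
    class.Init(instance, ...)
  end
  return instance
end

function kidclass.new()
  return setmetatable({}, kidclass)
end

-- A Point class with a Point(x, y) constructor:
local Point = kidclass.new()
function Point.Init(self, x, y)
  self.x, self.y = x, y
end

local p = Point(3, 4)
print(p.x, p.y)  -- 3  4
```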

Alright, now a constructor can be created for our class, in the form of a function stored on the class itself.

This already works as very simple OOP, so if you want, you can stop here… But I still want to support operator overloading!

Operator overloading in Lua is also done using metamethods: __add for +, __mul for * and so on (look in the Lua manual for all of them; there are also metamethods for == and <= and more).

This means we need to give the instances a metatable where Lua will look for __add and friends. Since we are doing OOP, the most logical place for __add is the class…

Thus, when we do instanceOfBallC = instanceOfBallA + instanceOfBallB; Lua will look for __add in Ball.

We modify our __call again, to this:
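A sketch of the modified version, with a Ball class exercising __add (“Init” as the constructor name is an assumption, as before):

```lua
local kidclass = {}

-- Giving each instance its class as metatable means Lua finds __add,
-- __mul and the other operator metamethods on the class:
kidclass.__call = function(class, ...)
  local instance = {}
  setmetatable(instance, class)
  if class.Init then
    class.Init(instance, ...)
  end
  return instance
end

function kidclass.new()
  return setmetatable({}, kidclass)
end

local Ball = kidclass.new()
function Ball.Init(self, size) self.size = size end
Ball.__add = function(a, b) return Ball(a.size + b.size) end

local big = Ball(2) + Ball(3)
print(big.size)  -- 5
```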

Static variables we already support: just add them to the class (i.e. myClass.staticVar = 23). But I think we can add one last cool feature… We can make instances read, from their class, any variable they do not have themselves, thus creating default variables that are also static variables. We do that in Lua by setting the __index of the table that is missing a variable; since __index is a metamethod too (and we can set it to a table, as a shorthand of sorts), it goes in the metatable.

Thus, just before the setmetatable(instance, class); line we add class.__index = class. Now, any time someone tries to read something that does not exist from an instance (which has the class as its metatable), Lua reads from whatever the metatable’s __index points to (the class itself), so in practice instances actually inherit from our class.

This is the final version of the code (PLEASE do not just copy-paste it… it is here for studying; I am placing commercial private code here as a goodwill gesture to educate you, not for you to just leech it without understanding it).
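Putting all the pieces together, a version of the module could look like this (a sketch assembled from the description above; the constructor field name “Init” is an assumption, and details may differ from the original kidclass):

```lua
local kidclass = {}

kidclass.__call = function(class, ...)
  local instance = {}
  class.__index = class          -- missing fields fall back to the class
  setmetatable(instance, class)  -- for operators AND default variables
  if class.Init then             -- optional constructor
    class.Init(instance, ...)
  end
  return instance
end

function kidclass.new()
  return setmetatable({}, kidclass)
end

-- Default/static variables are read through __index:
local Monster = kidclass.new()
Monster.hitpoints = 100          -- lives on the class

local m = Monster()
print(m.hitpoints)               -- 100, read from the class
m.hitpoints = 50                 -- writing creates the instance's own field
print(Monster.hitpoints)         -- still 100

-- return kidclass   (the last line of the real kidclass.lua)
```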