For a while now I have wanted to attempt a comparative analysis of how programming advanced over the decades I witnessed. Nostalgia often draws me back to the days when it all began, and I try to understand how much has really changed: what the technologies and possibilities were back then, how they compare to today's, whether programmers were better then, how today's IT industry measures up at all, and whether we have all simply become users of a consumer software suite called Visual Studio, which no longer guarantees that one can use Word as well.

I made my first steps in programming on a Commodore 64. Actually, you couldn't avoid "programming" if you wanted to execute any command or start a game: after powering on the computer, the BASIC interpreter would appear, waiting for your input.
To be a programmer in those days, you needed to know the commands of the BASIC programming language, which handily came as an instruction booklet with the computer itself, type in your commands, and issue the RUN command to execute the program:

10 PRINT "HELLO WORLD"
RUN
HELLO WORLD
READY.

Of course, the interesting stuff on the Commodore wasn't programming but games, and in BASIC one could hardly make any game. BASIC was a high-level language that didn't offer much freedom to exploit the full potential of the hardware platform for the given purpose, meaning, to make a game. To achieve that, one had to whisper directly into the (C)PU's ear and write "machine" code: direct processor instructions.

As even the most hard-core programmers aren't complete masochists, all new computers relatively quickly got their assemblers. An assembler is a symbolic language for writing machine code, and its biggest advantage was data handling: you didn't have to encode text or an image byte by byte, but could declare it as a pre-made array and reference it by a label.

Computers themselves also worked a bit differently then. All of those older machines had a factory-predefined memory map of what lives where. So if you wanted, for example, to put data on the screen, you would hit one fixed memory address ($0400, 0xA000, etc.), to play a sound another, to read keyboard input yet another, and you would place your own code anywhere in between.

So a simple PRINT "HELLO WORLD" became:

        LDX #$00
LOOP    LDA MSG, X
        STA $0400, X
        INX
        CPX #$0B
        BNE LOOP
        RTS
MSG     .TEXT "HELLO WORLD"

… and programmers divided into those who wrote games in machine code and the users of the consumer application called BASIC, which anyone could know if they read the instruction manual and understood it.
I was, of course, part of that consumer group back then, and only years later, after I had read that BASIC manual to the end, did I begin to understand that division and what advantages low-level code brings to the final product.
And this trend continued for a long time.
My next computer, the Amiga, had everything the C64 had, but with more instructions, more registers, a richer BASIC, more memory locations of interest, and more memory space for the programs themselves. Finally the PC arrived as a home computer, and almost nothing changed. I say "almost" because the hardware evolved in the direction of multitasking, so interrupts and dedicated memory buffers took on a big role, although the concept of coding against those new things remained the same.
Still, very few applications were written in BASIC; more serious applications, including operating systems and games, were made using assemblers or other high-level languages that would inherit from and replace BASIC.
Actually, it wasn't a question of language, but of the approach to making software. Development suites started to ship with ready-made libraries optimized at the machine-code level, which would be compiled with the user's code and linked into the final product. Such an application would execute on its own, without interpreters, run many times faster, and be completely independent.
Another advantage was that a commercial application would no longer be distributed as source code, but as a byte array of processor instructions, unreadable to the human eye without a disassembler, so nobody could simply replace the name and copyright in your code and sell the software as their own.
The most widely accepted programming language of that time using this principle was C. In the beginning it was more of a super-macro language on top of assembler, and everything fit together nicely.

printf("Hello World");

Simple enough to write, it only required compiling and linking before you had a final stand-alone executable application.

In my opinion, this was the golden age of programming. Common tasks could be executed with the highest efficiency, with the ability to go even lower and take complete control of the hardware. My favorite revelations from this period are tied to optimization tricks, such as replacing:

int x = y / 2;

with:

int x = y >> 1;

or:

asm {
	mov ax, [y]
	shr ax, 1
	mov [x], ax
}
...


Naturally, technology has advanced since then, mostly because the geeks who did all of the above out of enthusiasm, curiosity, fun, or the urge to prove themselves started making applications mature and reliable enough to drive serious improvements in all aspects of society.
Overnight, those irradiated kids with no trade, who played games all day long and had -20 eyesight from staring at the "TV", became a hugely sought-after workforce and started earning unimaginable salaries, at least from the perspective of those who had already written them off as a lost generation.
Only a few years later, that first wave allowed the revolutionaries of the IT industry to declare that they had reached the top of the food chain, planting the IT flag on the summit of Mount Everest somewhere in the second half of the '90s.
Around then begins the Second Age of IT, at least from the perspective of programmers.

You can read about that in "part two"…


Vladimir Stojanović

Senior Developer