Archive for the 'Programming' category
Dissecting DOSBox
December 20, 2010

If you’re a gamer and have been for years, then you’ve probably heard of, and quite possibly used, DOSBox. If you haven’t, then let me introduce it to you. DOSBox is a great little program for running all of your favourite classic games: games originally built for monitors and video cards which have since been retired, and for legacy audio systems like the Sound Blaster 16, Audio Galaxy, or Gravis Ultrasound. Specifically, it supports games and programs which were written for MS-DOS or compatible operating systems. Although the software specializes in supporting games, you may have success running other programs. It doesn’t make any guarantees regarding these legacy applications, which can require features provided by unsupported drivers; I have had success running complex software like the DJGPP compiler, but ran into a bit of trouble running an older version of TDE (the Thomson-Davis Editor).
I’ll wait here while you go and download your copy of the source code…
Now that you’ve downloaded the DOSBox source and presumably unpacked it, you are ready to get your hands dirty. This isn’t going to be an article about how to use DOSBox, but rather about how it works under the hood. What are the major software gears and wheels used to handle these programs? Why don’t all of your favourite games have 100% compatibility?
The first thing to understand about DOSBox is that it’s an emulator for x86 CPU instructions, floating point unit instructions, and various functions within MS-DOS compatible operating systems. Specifically, it emulates the functions around interrupt 21H (hexadecimal) and a couple around 20H, 25H, 26H, and 27H. It also installs a NULL handler for interrupts 28H and 29H which does nothing. But before we get ahead of ourselves, let’s take a step back and look at the various modules that make up the program.
CPU emulation. At the heart of any emulation program is the CPU emulation core. Every program (.EXE or .COM file) on your DOS-powered computer contains machine code and data. When using DOSBox, it’s the CPU emulator’s job to process that machine code; therefore, each program teased apart and executed by DOSBox arrives at this sacred chunk of memory sooner or later.
DOSBox can be configured to emulate the x86 core in a few different ways: either it emulates each instruction found in the program one at a time, or it batches up instructions and operands and translates them into native instructions for direct execution on the host CPU (they call this mode ‘dynamic’). Direct execution can provide better performance on some machines, but may be slower on others, so the original interpretive mode is still available.
Emulation can be a little confusing if you’ve never heard of it before, so let’s go over it once again. An executable program contains a bunch of binary data representing machine operation codes (or opcodes), operands (arguments to an opcode), and data. This information can be fed to your CPU by your operating system, or it can be read by another program, just like any other file, and interpreted one byte at a time. In the latter case, it is the emulation program which decides how to implement the functionality of, say, the CMPSB instruction (used for comparing bytes in memory), rather than the CPU providing the hardware implementation. It’s this act of interpretation that differentiates emulation from direct execution on the host processor.
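To make that concrete, here is a minimal sketch of an interpreter-style core. This is not DOSBox’s actual code, just the fetch/decode/execute shape that every such core shares; the two opcodes shown are real x86 encodings (90H is NOP, F4H is HLT), but everything else is invented for illustration:

#include <cstdint>

// A toy interpreter core, not DOSBox's implementation.
struct Cpu {
    uint16_t ip = 0;              // instruction pointer
    uint8_t  memory[65536] = {};  // 64K of emulated RAM
    bool     running = false;
};

void cpu_step(Cpu &cpu)
{
    uint8_t opcode = cpu.memory[cpu.ip++];   // fetch
    switch (opcode) {                        // decode and execute
    case 0x90:                               // NOP: do nothing
        break;
    case 0xF4:                               // HLT: stop the core
        cpu.running = false;
        break;
    default:
        // a real core implements hundreds of opcodes here, many of
        // which also fetch operand bytes that follow the opcode
        break;
    }
}

int main()
{
    Cpu cpu;
    cpu.memory[0] = 0x90;   // NOP
    cpu.memory[1] = 0xF4;   // HLT
    cpu.running = true;
    while (cpu.running)     // the interpreter loop itself
        cpu_step(cpu);
}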
FPU emulation. In the world of electronic gaming, the software powering those fantastic explosions, shattering those fragile glass windows, and hurling those flying projectiles often needs to do a series of calculations to determine values for acceleration and direction, and to work through various bits of trigonometry. These calculations can involve irrational numbers like PI (3.141592654…), which I’m sure you all remember from school. In programming terms, these numbers are often stored in variables which follow a standard method of encoding the significand and the exponent (along with the sign of the number); one such standard is IEEE 754. Let’s not get too mired in the intricacies of how floating point numbers are stored, which is quite boring after all and not conducive to an interesting read on a Friday afternoon. Instead, let’s evade this topic and push on to the operation codes in which floating point numbers are used as operands (parameters or arguments to a function). These opcodes may need to be emulated just like the ones used by the CPU. Only this time, if the encoding standard differs from the host processor’s (in other words, if it doesn’t use IEEE 754 and you’re using a PC to run DOSBox), then the emulator will need to convert that format into something it can use natively, which is an expensive operation (in terms of adding processing cycles and increasing execution time).
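If you’re curious what that encoding actually looks like without wading through the spec, here’s a quick illustration. It assumes the host stores a float as an IEEE 754 single-precision value, which holds on any PC you’re likely to run DOSBox on:

#include <cstdio>
#include <cstdint>
#include <cstring>

int main()
{
    // assumes 32-bit IEEE 754: 1 sign bit, 8 exponent bits, 23 significand bits
    float pi = 3.14159265f;
    uint32_t bits;
    std::memcpy(&bits, &pi, sizeof bits);   // view the float's raw bits

    std::printf("sign:        %u\n", bits >> 31);
    std::printf("exponent:    %u\n", (bits >> 23) & 0xFF);
    std::printf("significand: 0x%06X\n", bits & 0x7FFFFF);
}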
The takeaway from all of this is that floating point emulation can be slow in the worst case and fast in the best case. In either case, it’s not a good topic at parties, so let’s just slowly walk away from it…
Hardware emulation. This is certainly one of the more interesting layers in DOSBox and also the most prone to hacks and tweaks within the source code. The science of taking an analog output and reproducing it digitally is prone to approximations, since the nature of analog is approximate and digital is exactly the opposite. Most of the analog operations come from sound cards, where the device is capable of producing a variety of analog waveforms which are used to create music and sound effects.
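The digital side of that reproduction boils down to filling sample buffers. This is not how DOSBox models an OPL or Sound Blaster chip; it’s just a sketch of the general job, with every name my own: a real device emulator produces samples like this, only driven by the register writes the game performs. A square wave is the simplest case:

#include <vector>
#include <cstdint>
#include <cstddef>
#include <cmath>

// Render a square wave into 16-bit PCM samples; a stand-in for the
// register-driven synthesis a real sound-card emulator performs.
std::vector<int16_t> square_wave(double freq_hz, double seconds,
                                 int sample_rate = 44100)
{
    const std::size_t count = static_cast<std::size_t>(seconds * sample_rate);
    const double period = sample_rate / freq_hz;   // samples per cycle
    std::vector<int16_t> samples;
    samples.reserve(count);
    for (std::size_t i = 0; i < count; ++i) {
        // high for the first half of each period, low for the second
        bool high = std::fmod(static_cast<double>(i), period) < period / 2.0;
        samples.push_back(high ? 8000 : -8000);
    }
    return samples;
}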
The operating system layer. Before the days of operating systems employing graphical user interfaces to shield the user from arcane console commands, and providing a host of time-wasting games like Solitaire and Minesweeper, a sizable chunk of the PC market used DOS. Whether that was PC-DOS, MS-DOS, or DR-DOS is not really that important, since the other versions typically remained closely compatible with MS-DOS. The DOS platform offered a host of utility programs to the user, along with a few drivers which included support for memory management, mice, and generic CD-ROM drives. Users could always install a specific driver for a piece of hardware they bought, like a Sound Blaster card, and after setting a jumper or two to configure the interrupt and DMA channels, they would be off to the races. Unless something went horribly wrong…
Unbeknownst to many users, but knownst to a few geeks around the Megaverse, the motherboard BIOS provides a set of default drivers so that the boot sequence and the operating system can access a few essential devices like the hard drive, floppy drive, and keyboard when they start up. These drivers are usually ignored or replaced by the operating system so that it can provide its own, more advanced versions, but some of them are still used when your Windows operating system boots into safe mode, for example. The kernel, which is a core component of any operating system, provides mechanisms for switching between drives, accessing disks and partitions, and, in the case of DOS, providing fixed names for devices like “LPT1:” (printer), “COM1:” (communications port), or “NUL:” (the abyss). These device names, and indeed the drivers themselves, provided a level of abstraction for the user and for higher-level programs. The user could print a text file, for example, by issuing a shell command like “TYPE FILE.TXT > LPT1:”, but they could also use a program like WordPerfect, which had its own set of specialized printer drivers so that it could handle more advanced printing tasks like graphics and italicized or bold text.
DOSBox provides limited support for a few of these commands, but really it’s only enough to get your games up and running since that is its modus operandi after all. These commands can take one of two forms: an executable program or a keyword command available in the shell. I provide the list of available keyword commands in the Shell section below.
The interrupt layer. Much of the hard work in creating DOSBox probably arose from the requirements around CPU/FPU emulation, DOS and hardware abstraction support, and the support for interrupts. The bulk of the core system code for the DOS layer lies in supporting every required interrupt function. Interrupts are specific routines which can accept parameters from the calling program and then return the results of the function in special variables. All of this happens by loading up certain registers and invoking the interrupt CPU instruction. If you were to embed a small assembler routine into a C function, it might look like this:
void mouse_hide(void)
{
    asm {
        mov ax, 02h;    /* function 02H: hide the mouse cursor */
        int 33h;        /* invoke the mouse driver interrupt */
    }
}
In this example, the value 02H is loaded into the AX register, which selects the interrupt function to hide the mouse cursor, and the interrupt 33H is an entry in the interrupt vector table for accessing the available mouse functions (it acts like an index). DOSBox supports many interrupt functions, but the focus is on the ones necessary to run your favourite games. The important thing to remember about interrupts is that they do exactly that: they interrupt the CPU and force it to run the requested function.
Without going into too many technical details, interrupt authors generally follow two design rules: the functions must execute quickly, and you shouldn’t call an interrupt from within another interrupt. The programmers working on DOSBox need to implement those interrupt functions in whatever way makes the most sense on the running platform. So, if the game invokes an interrupt requesting a change of screen resolution and colour mode, then the DOSBox emulator needs to adjust the resolution of the game window and invoke its software support for VGA and EGA video modes, or a nice CGA video mode with a four-colour palette. Pretty.
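Inside an emulator, “running the requested function” usually means a table lookup: the INT instruction lands in the core, which hands the register state to a native handler. Here is a toy sketch of the idea; the structure and all the names are mine, not DOSBox’s:

#include <cstdio>
#include <cstdint>

// Toy interrupt dispatch; DOSBox's real tables are far richer.
struct Regs { uint16_t ax = 0, bx = 0, cx = 0, dx = 0; };
using IntHandler = void (*)(Regs &);

IntHandler int_table[256] = {};       // one slot per interrupt vector

void int33_mouse(Regs &regs)
{
    switch (regs.ax) {
    case 0x02:                        // function 02H: hide the cursor
        std::puts("hiding the host mouse cursor");
        break;
    default:                          // unimplemented mouse functions
        break;
    }
}

void do_interrupt(uint8_t number, Regs &regs)
{
    if (int_table[number])
        int_table[number](regs);      // run the emulator's implementation
}

int main()
{
    int_table[0x33] = int33_mouse;    // install the mouse handler
    Regs regs;
    regs.ax = 0x02;
    do_interrupt(0x33, regs);         // what executing "int 33h" triggers
}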
The abstract front-end layer. This would be the charming side of DOSBox, if the project actually provided a graphical user interface out of the box. Instead, they have designed it one level deeper and abstracted the program’s front end so that it can use the different media and windowing libraries provided by the host operating system. By default, it uses the SDL library (SDL stands for Simple DirectMedia Layer) to handle the creation of the application container, window frame, sound, input functions, and graphics modes. And lucky for them, SDL is available for Linux, Mac OS X, and Windows (with varied support for other platforms too), so there may never be a reason to move to a different library… until the SDL project is retired or their senior programmer gets hit by a bus. If the project didn’t use a library like SDL, then the application would be tied to a specific set of operating system libraries, or it would need to provide an assortment of implementation modules for each OS target, all following a nice, clean little interface… like the one provided by SDL.
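To give a sense of how much boilerplate SDL absorbs, here is roughly the smallest SDL 1.2-era program that puts a drawable window on screen on any of those platforms. The resolution and caption are placeholders of my own, not DOSBox’s actual settings:

#include <SDL.h>

int main(int argc, char *argv[])
{
    if (SDL_Init(SDL_INIT_VIDEO | SDL_INIT_AUDIO) != 0)
        return 1;

    // one call gets a cross-platform window and drawing surface
    SDL_Surface *screen = SDL_SetVideoMode(640, 400, 32, SDL_SWSURFACE);
    if (screen == NULL) {
        SDL_Quit();
        return 1;
    }
    SDL_WM_SetCaption("Emulator window", NULL);

    SDL_Delay(2000);   // keep the window around long enough to admire
    SDL_Quit();
    return 0;
}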
I’ll pause while you give a big hug to the people working on that project. Don’t you feel better now? Wait, it wouldn’t be right to ignore DOSBox, since they are the stars of this little side show. Let’s spread the love around and try not to get too messy in the process.
The scaler layer. Strictly speaking, I wouldn’t really call this a layer, but the architecture does abstract it somewhat, and a lot of people like this feature, so it’s worth discussing a bit. When you fire your favourite game up in DOSBox, you may notice the window it creates is a little on the small side, depending on your host’s current resolution. Wouldn’t it be nice if you could make the window bigger and still have it look good? That’s the job of the scaling routines. They take what would normally be a pixelated image (assuming you don’t like that sort of thing) and smooth out some of the rough spots. As with most scaling routines (other than piecewise-constant algorithms like nearest neighbour), there can be a bit of blurring, but if the algorithm uses a small set of surrounding pixels for its sample set, like an EPX or Scale2x routine, then the result looks quite good and still maintains an acceptable level of sharpness and detail.
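Scale2x is small enough to show in full. Each source pixel becomes a 2×2 block, and a corner copies a neighbour only when the adjacent sides agree. This is the published Scale2x/AdvMAME2x rule, sketched without the edge handling a real scaler needs; it is not DOSBox’s exact code:

#include <cstdint>

// Expand one source pixel p into a 2x2 output block.
// a = above, b = right, c = left, d = below. Published Scale2x rule;
// border pixels and the surrounding loops are omitted for brevity.
void scale2x_pixel(uint32_t p, uint32_t a, uint32_t b,
                   uint32_t c, uint32_t d, uint32_t out[4])
{
    out[0] = out[1] = out[2] = out[3] = p;        // default: plain 2x zoom
    if (c == a && c != d && a != b) out[0] = a;   // top-left corner
    if (a == b && a != c && b != d) out[1] = b;   // top-right corner
    if (d == c && d != b && c != a) out[2] = c;   // bottom-left corner
    if (b == d && b != a && d != c) out[3] = d;   // bottom-right corner
}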
The shell layer. The shell provides a subset of the keyword commands available natively in DOS: DIR, CHDIR, ATTRIB, CALL, CD, CHOICE, CLS, COPY, DEL, DELETE, ERASE, ECHO, EXIT, GOTO, HELP, IF, LOADHIGH, LH, MKDIR, MD, PATH, PAUSE, RMDIR, RD, REM, RENAME, REN, SET, SHIFT, SUBST, TYPE, and VER. It also provides an execution environment for running batch files.
Hopefully, when you fire up your next DOSBox-powered game (a number of products use this software, including game services like Steam), you’ll think of the long hours and tedious bits of programming that went into developing this stellar product, and maybe choose to send a bit more love their way this Christmas season.
Categories: DOS, Programming, Software
No Comments »
The Revolutionary Guide to Bitmapped Graphics
December 29, 2009

This is another book from my library that I have decided to look back on to see if there are any useful tidbits for programmers today. As with most technical books more than ten years old, there is usually an abundance of information about specific technologies which are no longer in popular use, or which are still present in one form or another but where the means to access them have changed dramatically. I personally believe that many of these books can give the novice programmer a background not taught in universities and colleges, and they will certainly give them an edge when working on limited or older machines.
The book does talk about the video hardware used in that time period and delves deep into the programmatic underpinnings of accessing the display and creating custom video modes. I found some of the discussions noteworthy, but if you really want a thorough explanation, you may want to investigate the Zen of Graphics Programming or the Graphics Programming Black Book. It also includes a bit of an assembly language primer, which is very typical for these books, since many of the routines were coded in that language. The introduction is short but may be a nice refresher for those who haven’t gotten their hands dirty in a couple of years.
I’ve made a list of what is still useful for work you may be doing today – unless you’re one of the lucky few who get to maintain software written in 1994. Your mileage will vary, as some of the sections are really just short introductions to a much larger field like digital image processing (DIP) and morphing. It even had a short introduction to 3D graphics, which seemed to be slapped on at the end because the publisher wanted “something on 3D” so they could put it on the cover.
- It provided color space introductions, conventions, and conversions for the following spaces: CIE, CMY, CMYK, HSV, HLS, YIQ, and RGB. Most of the conversions go both ways (to and from RGB space), although CMY/K conversion calculations are only provided from RGB space.
- Dithering and half-toning, followed by a chapter on printing. I think the authors mentioned Floyd-Steinberg in there somewhere, but it wasn’t a full discussion.
- Fading in the YIQ and HLS color spaces. I’m not sure why they didn’t provide one for the RGB space, but it could very well be on the bundled CD-ROM.
- It introduces the reader to a few algorithms for primitive shape drawing and clipping, like Bresenham line drawing and Sutherland-Cohen clipping. It also included discussions and examples for ellipses, filled polygons, and B-spline curves.
- Extensive discussions on the graphics file formats GIF, JPEG, TGA, PCX, and DIB, although these tended to be higher-level than what would be useful for someone implementing a decoder for any one of these formats (with the possible exception of PCX). Associated algorithms like LZW and RLE are also explained, as they are used by encoders for these formats.
- The topic of fractals and chaotic systems was a little out of place, but a bit more extensive than the chapter on 3D. It did explain the concept of an L-system fractal, and even provided a generator for it. When supplied with a configuration file, it could produce fractals like the von Koch curve. It briefly touched on the Harter-Heighway dragon fractal and introduced the Mandelbrot and Julia sets, but didn’t delve into chaos theory, even though I’m sure one of the authors desperately wanted to do so.
- Related to the discussion of fractals was the section on generating landscapes via the midpoint displacement method (a quick sketch follows this list). While not a landscape per se, the authors digressed a bit to talk about cloud generation as well.
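For the curious, midpoint displacement fits in a dozen lines. This is a one-dimensional height-line version under my own naming, not the book’s code: recursively set each midpoint to the average of its endpoints plus a random offset, halving the offset range as the segments shrink:

#include <cstdio>
#include <cstdlib>

// Fill h[lo..hi] with a jagged "terrain" line; rough controls how far
// each midpoint may stray from the average of its two endpoints.
void midpoint_displace(double h[], int lo, int hi, double rough)
{
    int mid = (lo + hi) / 2;
    if (mid == lo || mid == hi)
        return;                                   // segment can't split further
    double offset = rough * (std::rand() / (double)RAND_MAX - 0.5);
    h[mid] = (h[lo] + h[hi]) / 2.0 + offset;
    midpoint_displace(h, lo, mid, rough / 2.0);   // halve the roughness
    midpoint_displace(h, mid, hi, rough / 2.0);
}

int main()
{
    const int n = 17;                 // 2^4 + 1 sample points
    double h[n] = { 0.0 };            // endpoints start at height 0
    midpoint_displace(h, 0, n - 1, 8.0);
    for (int i = 0; i < n; ++i)
        std::printf("%6.2f\n", h[i]);
}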
The book finally managed to get around to the reason I bought it in the first place many years ago, which was the all too brief chapter on DIP techniques. It quickly introduced and provided code for algorithms like the Laplace filter, as well as popular effects like emboss, blur, diffuse, and interpolation. The treatment was very light, so the reader will not walk away with a solid understanding of any of the example code, other than trivial effects like pixelate or crystallize.
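As a taste of what that chapter covers, here’s a bare-bones 3×3 convolution with one common Laplace kernel. This is written from scratch rather than taken from the book; greyscale only, with the one-pixel border skipped for brevity:

#include <vector>
#include <cstdint>
#include <algorithm>

// Apply a discrete Laplace kernel to a w-by-h 8-bit greyscale image.
// The kernel responds to edges and flattens constant regions to zero.
std::vector<uint8_t> laplace(const std::vector<uint8_t> &src, int w, int h)
{
    static const int k[3][3] = { {  0, -1,  0 },
                                 { -1,  4, -1 },
                                 {  0, -1,  0 } };
    std::vector<uint8_t> dst(src.size(), 0);
    for (int y = 1; y < h - 1; ++y) {
        for (int x = 1; x < w - 1; ++x) {
            int sum = 0;
            for (int j = -1; j <= 1; ++j)
                for (int i = -1; i <= 1; ++i)
                    sum += k[j + 1][i + 1] * src[(y + j) * w + (x + i)];
            dst[y * w + x] = (uint8_t)std::min(std::max(sum, 0), 255);
        }
    }
    return dst;
}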
Categories: Books, Graphics, PC, Programming
3 Comments »
Racing the Beam
May 16, 2009

I just finished another great book the other day, entitled Racing the Beam: The Atari Video Computer System by Montfort and Bogost. It’s an inside look at some of the development challenges and solutions encountered when writing games for the Atari VCS. This is a unique machine and is often considered one of the most difficult for a programmer to cut their teeth on. With 128 bytes of RAM and an average ROM size of 2, 4, or 8K, you must fight tooth and nail for every byte used by your software. What lengths do some programmers go to in order to skimp and save on bytes? Ever thought about using the same byte for both an opcode and a piece of data? Ever thought about feeding the opcodes and operands found in the code segment of your program into a pseudo-random number generator, or using them to produce a rendering effect, because you didn’t have the spare space in ROM to place this stuff into the data segment? Well, neither had I until I read this text. Along with little gems like this, the book has a number of interesting tips and tricks about the how and why of software development for the Atari 2600.
The book centres itself around the idea of a platform, and how the constraints and peculiarities of a system can affect how a game is presented. Game adaptation, especially when you’re trying to port software from one hardware architecture to another, is a very important topic when you’re trying to maintain the look or feel of a game. Sometimes neither is possible, and you’re forced to go your own way and come up with something completely different.
A word of caution, though. This book will not teach you how to write software for the 2600 system. It is not a technical reference by any means, nor does it advertise itself as one. However, I would heartily recommend this title to anyone thinking about producing a game for that system, or those of us with an inner geek needing to be satisfied.
I love the idea behind this series of “platform” books, as I have often wished for such books to be written and have even contemplated writing one myself just to fill the void. One of the most useful parts of this book is the reference section, which can lead you to all sorts of new and interesting articles, books, or projects. I do hope the next book contains a bit more technical detail while keeping the various bits of historical data and interesting character references which really help to tie the why and the how of the topics together.
Categories: Atari 2600, Books, Programming
2 Comments »
Computer Virus Research
December 21, 2008

As part of being a well-rounded programmer, I dabble in all sorts of technical things. One of my areas of interest is computer virus research. Over the last thirty years, I have witnessed a large number of changes in this industry, and I find myself compelled to write a little bit about it today after reading about a couple of courses offered at the University of Calgary.
As it exists today, computer virus defense is a wide collection of software programs and support networks which are offered to companies and users for the sole purpose of protecting their data from loss, damage, or theft caused by a myriad of small computer programs called computer viruses. These programs must have the ability to replicate (either as a copy of themselves or as an enhanced version) and often carry a payload. The means by which a computer virus can replicate are complicated and often involve details of the operating system. In addition to preventing virus outbreaks, anti-virus software is also used to help prevent service outages and ensure a general level of stability. In other words, these companies are selling security, or at least one form of security, since security in general is a very large net which cannot be cast by only one program. As an aside, please be aware of the tools you are using for anti-virus protection. With some research and a little education, it’s often not necessary to purchase these programs in the first place.
I am currently reading Peter Szor’s book entitled The Art of Computer Virus Research and Defense (ISBN-10: 0321304543). I am almost finished with the text, and I have found the book incredibly informative: filled with illustrations and summaries of all sorts of computer virus deployment scenarios, technical information about individual strains, and historical notes on how the programs evolved and the mistakes made by both researchers and virus writers.
Even though I have the skills and the opportunity to do so, I have never written a computer virus for the purposes of deployment, nor do I ever wish to, but I can tell you that writing an original computer virus is challenging work; writing a simple virus is easy. Isolating, debugging, and analyzing a virus is also interesting work, albeit somewhat more tedious. Both jobs require similar skill sets: detailed knowledge of, and low-level access to, a specific system.
I used to posit that the best virus writers would be the people who have taken it upon themselves to write the anti-virus software. After all, the best way to ensure the success of a business built on computer virus defense is to construct viruses that can be easily and quickly disarmed by your software. Much to the disappointment of conspiracy theorists, this is probably not the case, since fellow researchers would easily link a premature inoculation with a future virus outbreak if it happened too often to be mere coincidence. However, if your business were based on quick and successful virus resolutions, then timely outbreaks followed by timely cures would seem to solidify the business model. Personally, I think anti-virus researchers are kept busy enough with “naturally” occurring strains that no manual jump start of the industry is necessary. That could change as users and technology platforms become more advanced, although the more probable route is the disappearance of the anti-virus industry; we live in a messy world, and there may be opportunities for those wanting to leave their mark, even in the face of futuristic technology gambits.
Computer virus writers are plagued, somewhat ironically, by numerous problems in deploying their masterpiece. A computer virus can be written generically so that it can spread to a wider variety of hosts, or it can be written for a specific environment, which can include requirements on the hardware or software being used. Dependencies on software libraries, operating system components, hardware drivers, and even specific types of hard disks are both liabilities and advantages for a virus. They are liabilities because dependencies limit the scope of infection, so the virus spreads more slowly; but at the same time, they often enable the virus to replicate in the first place, since the virus may be using known vulnerabilities or opportunities within these pieces to deliver its payload or as a means to spread.
Virus research, writing, and defense is a fascinating topic. Unfortunately, I find the pomposity, and to some degree the absurdity, in various branches of the industry to be laughable and a little scary at times. In case you haven’t heard, the University of Calgary is offering a course on computer virus research. While I find this to be a refreshing take on education, my hopes are quickly dashed when I read the requirements and the Course Lab Layout (warning: PDF monster). Do they think their students are secret agents working in a top secret laboratory? Of course they do; why else would there be security cameras installed in the room, and why else would they restrict access to the course syllabus? Well, I’ve got news for the committee who approved the layout of the lab, and who probably approves the students who can attend the course: computer viruses are just pieces of software. That’s right, they’re just software. They don’t have artificially intelligent brains, they can’t get into your computer through the power lines, and they are quite a bit less complicated than your average word processor. This means that any programmer with the desire and a development environment can write a virus, trojan, or any other form of malware. They don’t need to take your course, and they don’t need access to your Big Brother Lab.
The absurdity of protecting information which is already publicly available, and has been for decades, makes me want to laugh out loud and strangle someone at the same time. It’s rather disturbing, and I really don’t like the idea of closing doors on knowledge, even if the attempt is futile. The University of Calgary’s computer science department should be ashamed of perpetuating such ignorance within a learning institution, and I am truly disappointed at how bureaucratic such systems have become.
Update 12-29-2008: To respond to a verbal conversation I had with a couple of people: I understand why the university placed the security restrictions in the program; they want to validate the program and make it appear legitimate to the community and their peers. That’s fine, but at the same time, it must be acknowledged that the secret to mounting a successful defense against viral software and Internet-based attacks is shared knowledge and open avenues for information. Understandably, this information will flow both ways, but the virus writer will gain nothing they do not already possess (except the knowledge that we know what they are doing), while the general public may be a little more aware of the problem than they would be without it.
Indeed, viral kits and small customization programs can make viral programming easy for the layman or the immature programmer, but we shouldn’t be locking away information about these techniques or programming practices simply because the result is something undesirable or easy to dispense. There are real opportunities to learn from and disseminate this knowledge today, and the bigger the audience, the larger the opportunity for successful anti-viral software and general consumer awareness, which combine to create the most effective vaccine of all: knowledge.
Categories: Programming, Reflections
No Comments »
Qt for Games – Niblet #1
July 19, 2008

A few days ago, I decided to experiment with Qt as it applies to game development. You haven’t heard of Qt? Well, it’s an SDK for cross-platform development which has really started to take off over the last couple of years. KDE, a desktop platform for Linux, is based on the Qt library; so is the VOIP application called Skype. They even have a mobile platform called Qtopia for those who are interested in small devices. You can find more information on the Trolltech website.
To begin the project and to get the brain juices flowing (after the second cup of coffee, of course), I wanted to come up with a small, interactive demo which would make use of some of the classes related to QGraphicsScene and QGraphicsItem. I also wanted to play around with the layers a little bit and see how suitable they would be for games. The first program is a simple one indeed; in fact, it’s not really a game at all – unless of course you’re really bored and you start to chase invisible dots on your screen. Just run the demo, and you’ll understand what I’m talking about.
I am basing this project on a classic so that I don’t stray too far off topic. MS-DOS 5.0 shipped with a QBasic game called Nibbles. The gameplay revolves around a worm that you steer through a maze of walls while looking for food. If your worm collides with a wall or with itself, you lose a life. The difficulty comes from the worm’s velocity; it gets faster as you advance. The idea is to use that basic game as a model at first and build on it a little as time goes on; this will help to keep the project focused, which can be a real challenge for those who are new to game programming. To get things started, I have decided to dub the codename for the project: Niblet. The real name will be decided if and when this Qt game comes together.
The first problem is the worm, of course. It needs to snake around and grow to certain lengths when food pellets are ingested (or numbers, as was the case in the original game); I also like the idea of it shrinking in response to an item collected during the progression through a level. Anyway, the rules can be fleshed out later. For now, let’s just get a very basic framework up and running. To compile the little demo, just run qmake and then make if you’re on Linux, or open up the project in Xcode if you’re using Mac OS X; Windows users can open the project in Microsoft Visual Studio. If you’re using Mac OS X Leopard, I suggest you try out Qt 4.4, which will produce a proper project file for Xcode 3.1.
As you can see, very little code is needed when using Qt. If you were to write the same program for Windows using DirectX, or even through the GDI, you would need considerably more code just to handle the application start-up process and surface initialization. But that’s the beauty of these SDKs: they handle a lot of the routine programming so you can concentrate on building your software faster and better.
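Since the demo itself isn’t pasted here, here’s a stand-in to show the scale I mean: a complete Qt 4 program that opens a scene with a single item in it. The sizes and colours are arbitrary placeholders of mine, not values from Niblet:

#include <QtGui>

int main(int argc, char *argv[])
{
    QApplication app(argc, argv);

    // a scene holds the game objects; the view is the window onto it
    QGraphicsScene scene(0, 0, 320, 240);
    QGraphicsRectItem *segment =
        scene.addRect(0, 0, 10, 10, QPen(Qt::NoPen), QBrush(Qt::darkGreen));
    segment->setPos(150, 110);     // one worm segment, mid-screen

    QGraphicsView view(&scene);
    view.setWindowTitle("Niblet sketch");
    view.show();

    return app.exec();
}

Compare that with the window-class registration, message pump, and device setup a raw Win32 program needs before it can draw its first pixel.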
Categories: Game Development, Programming
No Comments »
Building Software with DOSBox?
May 12, 2008

I have written about DOSBox in the past and I have nothing but good things to say about it. It’s a great project whose people continue to push for greater compatibility, speed, and functionality. Over the weekend, I tried to use DOSBox as a development environment for compiling DOS applications with the DJGPP 32-bit compiler and its associated development suite: RHIDE, Allegro, the GRX libraries, etc.
Sadly, it didn’t work out so well, since a number of common tools simply don’t function that well in v0.72. I’m not terribly disappointed, since the emulator takes forever to compile a project in its environment and wouldn’t be suitable for long-term development. I’m raising the issues here in case anyone tries to use DOSBox in the same way.
First, let’s talk about what did work: DJGPP, for starters. It works just fine, albeit the compilation and linking process does take a while. Some might argue that this is all you need to develop a project, and while that might technically be true, it certainly doesn’t work out for anyone trying to build something more than a “Hello World!” application.
So, what didn’t work? Well, I tried four different editors: edit (the one which ships with your MS-DOS operating system), TDE (Thomson-Davis Editor – one of my favourites), MEL (Multi-Edit Lite – another great little editor), and RHIDE (the development environment which you can install alongside DJGPP). I didn’t try VI/VIM, but I might sometime later just because I’m curious.
First out of the gate, Edit simply wasn’t suitable in an environment where tab characters are important: it replaces them with spaces and continues on its way. Too bad, since the editor is simple and to the point. Then again, it really isn’t a good solution anyway, since it lacks one of the most basic development features: syntax highlighting.
DJGPP makefiles will not tolerate spaces where there should be tab characters. No, make is not being quirky; the tab is in the POSIX standard, after all, but that doesn’t make it any less annoying. So, the solution is to switch to an editor which respects tab characters and does not try to convert them to nasty spaces. I jumped to a more sophisticated editor: TDE. Sadly, I was let down again, due to a bug (it could be within TDE or DOSBox) where the cursor was not visible while using the editor. It was a little like fumbling in the dark trying to find that blasted light switch.
After some research I found MEL, so I installed it and tried it out. The editor was well laid out, the cursor showed up, and it respected my tab characters. Super, finally something I can use! Alas, I soon discovered that our relationship would never last. You see, MEL doesn’t use tab characters out of the box; you need to turn them on through one of the configuration dialogs. No problem, right? Well, it doesn’t seem to remember the setting once you save it, so you need to set it every… single… time you load the editor. Hrrmph.
“Well, it’s time to bring out the big guns!” I thought. I had used RHIDE in previous development projects and found the environment quite enjoyable for the larger ones. I typed in the command to fire it up and… whammo! Congratulations, it’s a bouncing baby SEGFAULT. Sigh. I don’t know about you, but four editors is enough experimentation for one day. If you find an editor which works well and meets my humble requirements, please post it in the comments.
Anything missing which should be added? Well, you’ll certainly want to add some basic DOS utilities such as XCOPY, DELTREE, ATTRIB, and whatnot; it will make your life much easier. It would also be great if DOSBox incorporated a configuration reload command, so you didn’t need to exit the emulator in order to reload the preferences file. For now, I’m going to use one of my older development boxes for this software project, and stick with DOSBox to run my games.
Categories: DOS, Programming
2 Comments »
PGP: Source Code and Internals
April 28, 2008

My copy finally arrived the other day and I am elated. Now, before you make another hasty buying decision based solely on my opinion, and on a single line of text, there are a couple of things worth mentioning about the book. First and foremost, the book lacks a few structural elements, like a plot, and little things like paragraphs. Having read about the history of Philip Zimmermann and his toil with the U.S. government, I already knew this before I purchased the book.
It does contain source code. Lots of code. The entire PGP program, in fact, including the project files. The book was sent to print because the digital privacy laws the government was attempting to enact at the time did not cover the printed page. If all you want is the source code, you can simply find it on the Internet; looking for a specific version, especially an older one, is more difficult and may be fraught with export restrictions.
The story behind the printing of the book is a fascinating history lesson and one we should all be concerned about, even if you live in another country, since we all know governments are not terribly adept at learning from their mistakes.
Categories: Books, Programming, Software
No Comments »
QBasic
December 5, 2007

When I mention QBasic to some people, they immediately think I’m talking about QuickBASIC. The two products, however, are a little different. They were both created by Microsoft, but QuickBASIC is basically a superset of QBasic. QBasic has an interpreter and an editor built as one package; I hesitate to call it an IDE since your projects could only use one module at a time. It was also limited in the amount of memory available to the program and the amount of memory available to the editor. I experienced the latter problem only once, while creating a game involving viruses and robots (I didn’t get around to naming it); the editor just started losing lines of code I had written and was behaving erratically. Eventually, I became frustrated and moved on to other, and presumably smaller, projects.
QBasic made its first appearance with MS-DOS 5.0. It came with a few example programs and games. One of these games was called Nibbles. I love this simple game, even to this day. It’s a little similar to games like Centipede, although the game itself is far too simple to make a reasonable comparison. The goal of each level is to gobble up the numbers that appear in random locations. Each time one of those numbers gets consumed, your “snake” grows a little longer. You have to avoid running into the walls of the level, which get more complicated as you progress, and you must not run into yourself. As you attain higher levels, your snake becomes faster and faster. The game was never synchronized with the system clock, so if you played it on a machine made today, it would move around so quickly as to render the game unplayable.
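That failure mode is easy to avoid today: advance the game in fixed time steps measured against a real clock instead of once per loop iteration. A minimal sketch; the 100 ms step and the hook names are my own inventions:

#include <chrono>
#include <thread>

int main()
{
    using clock = std::chrono::steady_clock;
    const auto step = std::chrono::milliseconds(100);  // one move per 100 ms
    auto next = clock::now() + step;

    for (int moves = 0; moves < 10; ++moves) {   // stand-in for the game loop
        // update_snake(); render();             // hypothetical game hooks
        std::this_thread::sleep_until(next);     // paced by the wall clock,
        next += step;                            // not by CPU speed
    }
}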
I have often thought a game like Nibbles would make an excellent project for practising your porting skills on other platforms. It is sufficiently interesting to make the project worthwhile and could be adapted to play well using almost any input device. It could also be rendered in a simple text mode, just like the QBasic version, or you could enhance it in a graphics mode using imagery, vector graphics, etc.
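The heart of any port is tiny, which is part of what makes it such a good exercise. Here is one way to model the snake (my own modelling and naming, not the QBasic original’s): a double-ended queue of grid cells, where moving pushes a new head and pops the tail, and growing simply skips the pop:

#include <deque>
#include <cstddef>

struct Cell { int x, y; };

struct Snake {
    std::deque<Cell> body;     // front() is the head
    int pending_growth = 0;    // segments still owed from eaten food

    void step(int dx, int dy)
    {
        Cell head = body.front();
        body.push_front({ head.x + dx, head.y + dy });
        if (pending_growth > 0)
            --pending_growth;  // keep the tail: the snake grows this move
        else
            body.pop_back();   // normal move: the tail follows the head
    }

    bool hits_self() const
    {
        const Cell &h = body.front();
        for (std::size_t i = 1; i < body.size(); ++i)
            if (body[i].x == h.x && body[i].y == h.y)
                return true;
        return false;
    }
};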
QBasic also introduced the concept of functions and other forms of structured programming. GW-BASIC could only remember and execute your program if each line was prefixed by a number, or it would execute instructions right away in immediate mode. As you added and removed lines from your code, there were a couple of commands to reorder or renumber your lines when you ran out of room.
The concept of a function didn’t really exist in GW-BASIC; instead, it allowed you to jump to a particular line number using commands like GOTO or GOSUB. The latter was more like a function, since you could jump to a specific region of code and then return from it when the code had finished. GW-BASIC also supported one-line functions, which were handy for calculations. Although QBasic could still use line numbers, it encouraged the use of named labels and subroutines instead. Despite my work with the Amiga’s BASIC, I still preferred the old way, since I had been doing it for so long. It took me a while to adjust to the new program structure at first, so I purchased a new QBasic book after upgrading and essentially dove in head first.
Interrupts would become increasingly important to me in the future, but at this time I knew little about them. QBasic had no direct interface for handling interrupts, but it could react to events from the system’s timer:
ON TIMER(n) GOSUB MySubRoutine
Having no functionality to manipulate interrupts meant there were no functions to gather information about input devices like the mouse. Despite this seemingly major failing, all was not lost. While QBasic could not handle input from the mouse directly, you could make use of a machine code subroutine to get the information you needed, like position and button states. You could use techniques like this to gather information from other devices as well.
QBasic also introduced me to one of the greatest time-saving features ever created: the debugger. A debugger can be a separate program or a feature within an IDE which allows you to trace through your program and examine variables and addressable data as the software executes. One of the core features of a debugger is the ability to set a breakpoint at a specific addressable location that corresponds to a precise line within your source code. Before debugging, I was tracing through my programs by hand and using PRINT commands to dump the contents of variables. Even today, there are professional programmers who don’t use a debugger, either by choice or lack thereof, and choose to examine how their software operates by sending information to a log file or an output stream of some sort.
Categories: PC, Programming, Reflections
No Comments »
GW-BASIC
November 29, 2007

I first discovered this little gem while poking around on my Tandy 1000 RL computer back in 1991. Because I was already familiar with various versions of BASIC, I was able to fire it up and immediately begin writing fairly simple applications. However, there were differences from Atari’s version of BASIC, and no discernible way to figure them out without a book or some other documentation. I poked around on a few of my favourite Bulletin Board Systems (BBSes) and found a small cache of programs written in GW-BASIC. I downloaded each of them, spending all of my download credits in the process. I pored over them line by line and found myself having more questions than answers.
Many new programmers today expect there to be some sort of documentation available when they learn a new technology. It’s an expectation that has evolved over the years. Today, it would be practically unheard of if you couldn’t find some resource describing the software on the Internet, some blurb in a book, or even documentation bundled with the product. However, when I started looking for a book on GW-BASIC (version 3.23, to be exact), it was darn near impossible. The people I spoke with had no understanding of programming, let alone a specific language. The computer sections in the stores were spartan compared to the sections you find today. In fact, several years later, I still needed to special order the books I wanted through the book store; even today, I usually order my titles through Amazon, since a store like Indigo usually doesn’t have them in stock.
I eventually wandered into a local computer store – I was attracted by a demo of Wing Commander II playing on one of their expensive new 486 machines – and asked if they knew where I could get my hands on some material. I spoke with the owner, and he seemed to recall seeing a book for BASIC in the back of the store. He left for a couple of minutes and returned with two books under his arm. One was for the exact version of GW-BASIC I was looking for, and the other was a companion book for MS-DOS. I was ecstatic, and I nearly fainted when he gave them to me for free. I don’t remember what I said to the man that day, but I’m sure it wasn’t adequate.
I wrote so many programs in that environment. I think I still have a few of them today on floppy diskette. They were simple at first – word games like hangman. However, it wasn’t long before I discovered how to increase the resolution and colour depth and draw simple graphics. I had already written code for the Atari which made use of pixel plotting routines and simple geometric shapes. The Atari provided pre-canned routines for drawing shapes, along with a few parameters for style, colour, and form. GW-BASIC was similar but provided several more commands and options. With a smaller font and a higher resolution, debugging within BASIC became more feasible. I still couldn’t scroll, but at least I could view a larger portion of the code on the screen.
Sound was also possible through the PLAY command or, if you were so inclined, through a custom routine written in assembly language, which could generate all sorts of sounds, even noise or speech. I had taken several songs from sheet music and converted them to their GW-BASIC equivalents. Not terribly exciting, but it did make for some creative demos which seemed to impress onlookers.
Some of the more interesting projects involved controlling external devices like printers. I must have programmed my software to use every conceivable option available on my Star NX 1000 II dot-matrix printer. I could instruct it to use standard type effects like bold and italics when printing text, but I could also program it to print graphical data like images and shapes. I even printed the Mandelbrot set by rendering the fractal to my printer instead of the screen, one pixel at a time.
Perhaps the most interesting device was a robotic arm which was controlled through a series of commands dispatched through the parallel port. I didn’t have access to such a device at home, but my school eventually purchased one, so one of the teachers decided to create a contest for the students: whoever could program the robotic arm first would win a special prize. The task was to pick up a piece of chalk and draw a picture consisting of at least four shapes. It was a fun contest, and I walked away with a pass which granted me as much lab time as I wanted.
Before I upgraded to a new machine and moved to a new version of MS-DOS, I was intimately familiar with every command in that book, even the more obscure ones like CHAIN and PCOPY. The really obscure commands, though, were still a mystery; I didn’t learn that you could actually call assembly language routines from GW-BASIC using the DEF USR and USR commands until sometime later.
Categories: PC, Programming, Reflections, Software
No Comments »
Neverball
September 28, 2007

While I’m waiting for OS installations or code compilations to finish, I’ve been playing this great game on Xandros called Neverball. It’s similar to Monkey Ball, which is one of my favourites on the GameCube. It’s very polished for an open source game and is complete, with the exception of minor enhancements or bug fixes from time to time. Its only dependencies on Linux are the most common SDL libraries: the core library, SDL_image, SDL_ttf, and SDL_mixer. It doesn’t compile so well on Intel Macs, but I should get that working pretty soon. I believe there is an Xcode project file available too, for those who choose to take the quick and easy path. I may soon walk the path of the dark side, depending on how quickly I can resolve these obtuse linker errors.
Categories: Games, Programming
No Comments »