Welcome to the Datatron 205 and 220 Blog

This blog is a companion to T J Sawyer's Web Page that outlines the history of the Burroughs Datatron 205 and Paul Kimpel's incredible 205 and 220 emulators. Please visit those sites and return here to post any comments.

Sunday, November 26, 2017

Emulator Timing Improvements

Version 1.00 of the retro-205 emulator was released at the end of March this year. The blog post accompanying that release described most of the technical changes in the release, then promised a second post on the subject of emulator timing.

This is Paul Kimpel with that long overdue second post for Tom's 205 and 220 blog.


The Significance of Timing


One of the design goals of the retro-205 emulator has been to run programs as close as possible to the real speed of a 205. Modern browsers can run the emulator much faster than that, however, so the speed must be slowed, or throttled, to meet that goal. This post describes how that is accomplished in the emulator, along with the improvements to timing accuracy that were introduced in version 1.00.

The effort to improve timing accuracy started early this year when Tom was experimenting with some prime-number programs and noticed they were running 20-30% faster in the emulator than he suspected they should. That set off a series of studies and experiments on both our parts to determine (a) how fast his programs theoretically should run, and (b) why the emulator was running them significantly faster.

Before getting into that, though, I first need to describe how the emulator regulates the speed with which it runs 205 programs.


Controlling Speed of Execution in a Web Browser


Most people by now know that a web page is constructed using a textual notation known as Hypertext Mark-up Language (HTML). That notation defines the elements we see on the page -- blocks of text, images, form fields, etc. You can think of a "web application" as a set of web pages with a dynamic component. Inside the browser is a programming language, Javascript, and an API, the Document Object Model (DOM), that allows Javascript code to access and manipulate the elements on a page.

The Single-Threaded Nature of Browser Applications


Javascript and the DOM were originally designed to enhance user interfaces by reacting to user-generated events -- mouse clicks, key presses, and the like. Based on its interpretation of those events, Javascript code can use the DOM to access and alter the elements on the page. People quickly realized that this capability could do a lot more than just enhance a static web page -- it could be used to build robust and sophisticated browser-based applications that do significant internal processing in their own right.

As a consequence, the facilities and techniques for building such applications have improved dramatically over the past 20 years. What started simply as a way to create an interface to an application running in a web server has in many cases become the application itself. The fact that we can now use a standard web browser to, say, model and emulate a computer system like the 205, let alone implement something as polished as Google Docs, is impressive, if not downright astonishing.

Browser-based applications still retain their event-driven nature, however. They are generally not structured the way that most off-line applications are. All processing for the set of windows making up an application (the emulator home page and all its sub-windows in our case) is strictly single-threaded. Only one thing runs at a time in that application -- there is no multi-threading or parallel execution. This is an intentional feature of Javascript, to simplify the programming model and avoid the need to coordinate multiple parallel threads of execution.

Thus, most browser applications are structured as a set of functions that are called one at a time from the browser's Javascript run-time controller. There is generally one global block of code that runs when the main page of an application first loads. That code links functions to handle events that will occur when the user interacts with the elements of the page. The code can also schedule functions to execute after a specified period of time. The global code then exits back to the Javascript run-time, leaving the functions it has linked to run later as events occur.

To maintain its single-threaded behavior, the Javascript run-time maintains a queue both of events that have occurred but not yet been serviced and of scheduled activities that have not yet come due. Each of these events and scheduled activities has a handler function associated with it, and as it rises to the top of the queue, the run-time calls that function. These handler functions do the appropriate processing for their events, and may of course link or schedule still other functions for future execution.

Once a handler function exits back to the run-time, the run-time then figures out what to do next. That may be to pluck another event out of the queue or do some of the browser's background processing. It may be there's nothing to do at the moment, in which case the run-time will wait for the next scheduled function to come due or for some external event to occur, such as a click in the user interface.

Another thing about Javascript functions in a browser is that you can't run them continuously. Because there is only one thread of execution, a function that runs continuously will block other functions from running, which would prevent any user events from being serviced or any scheduled activities from taking place. To prevent that, most browsers will allow a function called from the Javascript run-time to execute for 15-30 seconds, and if the function hasn't given up control (i.e., exited back into the run-time) in that amount of time, the run-time will assume that the function is caught in an endless loop. The browser will pop up an alert and ask the user whether they want to continue executing, abort the application, or bring up the debugger. Thus, a web application needs to be designed so that it returns to the run-time controller on a regular basis in order to avoid the run-time declaring it scriptum non gratum.

You can think of the emulator as a program that runs in a loop, iteratively simulating the execution of 205 instructions. Internally, though, the emulator is not structured around a loop. Instead, it's constructed as a set of functions that are called from the run-time, execute for a bit to accomplish some piece of work, perhaps schedule themselves or another function to run later, and then finally exit back into the run-time. The iterative behavior comes mostly from the run-time repetitively servicing its event queue, and from the handler functions continuing to schedule other functions that will run at some point in the future.

Regulating the Speed of Emulation


Throttling the performance of an application in a web browser is a bit tricky. You do it by introducing delays into a script's execution, and you introduce a delay by scheduling some function to run after the desired interval and then exiting back into the run-time. The application will then sleep until the next item in the run-time's queue is selected.

Functions are scheduled to run in the future by calling the Javascript setTimeout() intrinsic function, passing it a number of milliseconds to delay and the function to call after that delay has elapsed. The actual delay will be at least the amount of time you specify, because the run-time only runs one thing at a time, and some other function or background task may already be running when the delay expires. The scheduled function must wait for any currently-executing and queued activities to complete first. Thus, the run-time will call the function you passed to setTimeout() after the specified delay has elapsed and nothing queued ahead of it is still pending.

Therefore, the trick to throttling emulator performance down to an appropriate speed is to break up the design into functions that can be scheduled after appropriate delays, and to keep rescheduling those functions in order to keep the emulation running.
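In skeleton form, that pattern of scheduling and rescheduling looks something like the sketch below. All of the names here are invented for illustration; they are not the retro-205 emulator's actual identifiers.

```javascript
// Illustrative sketch of a throttled, self-rescheduling emulation step.
function emulationStep() {
    // ... emulate 205 instructions until the next memory access ...

    var delayMs = 8.4;    // e.g. the average main-memory drum latency

    // Reschedule ourselves, then fall off the end of the function:
    // control returns to the Javascript run-time until the delay expires.
    setTimeout(emulationStep, delayMs);
}
// The global start-up code would call emulationStep() once to prime the cycle.
```

The real emulator computes each delay individually rather than using a fixed value, as described in the sections that follow.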

Fortunately, in the case of the 205, we have delays happening all the time due to the relatively slow drum memory, so there are plenty of opportunities to give up control and exit back into the run-time. This also nicely prevents the emulator from hogging the single Javascript thread and having the run-time declare it non gratum.

Significance of the Memory Drum to Instruction Timing


To understand how drum timing interacts with performance throttling, let us first review the drum memory and its timing:
  • The drum rotates at 3570 RPM. It contains 24 tracks of 200 words each. An 11-digit word on the drum is the smallest unit of data that can be addressed, so it is also referred to as a sector.
  • At 3570 RPM, the drum completes a revolution in 16.8 milliseconds. Since there are 200 words on a track, a word passes under the read/write head once every 84 microseconds. This unit is referred to as a word-time.
  • 20 of the tracks comprise the main memory, with word/sector addresses 0 to 3999. There is a combined read/write head for each track.
  • The other four tracks comprise the high-speed loops, or just the loops, and effectively store 20 words each. These tracks are addressed differently than main memory, however -- each loop is addressed modulo 20 within a 1000-word address range. Thus, the tracks are commonly known as the 4000 loop, the 5000 loop, the 6000 loop, and the 7000 loop. Addresses 6018, 6278, and 6538 are all in the 6000 loop, but all of them identify the same sector on the drum because all those addresses are congruent modulo 20.
  • Instead of having a combined read/write head, the four loop tracks have separate read and write heads, placed 36 degrees apart, so that words written by the write head can be read by the read head 0.1 revolution (1.68ms) later. Normally the words read by the read head are fed back (“looped”) immediately to the write head, with the result that the words are duplicated 10 times around the drum. Hence, while each high-speed loop can store only one-tenth the words of a main-memory track, the drum can access those words 10 times faster than main memory.
  • When the processor requests a memory access, it must wait for the drum to rotate until the desired word is under the head. This delay is termed the latency. On average, a main memory request must wait for one-half a revolution, or 8.4ms. Accesses to the loops, however, must wait on average only one-half of 0.1 revolution, or 0.84ms. Well-written 205 programs tried as much as possible to execute instructions from one of the loops, and to keep frequently-referenced variables in a loop as well.
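The loop-addressing rule in the list above can be checked with a one-line calculation. This is just a sketch of the modulo arithmetic described there; the names are illustrative:

```javascript
var loopSize = 20;   // words per high-speed loop

function loopSector(addr) {
    // track-relative sector of a loop address: addresses within a
    // loop's 1000-word range are congruent modulo 20
    return addr % loopSize;
}
// 6018, 6278, and 6538 all land on the same physical sector (18)
// of the 6000 loop
```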
The timing of instruction execution for the 205 is complicated because of the drum. Each instruction requires at least one memory access -- to fetch the instruction for execution. For instructions that use an operand, a second memory access is needed to read or write the operand. For all but a few instructions, the drum latency dominates the overall execution time -- most instructions will finish before the drum rotates to the location of the next instruction to be executed, even when running from one of the high-speed loops.

So the big question when attempting to determine instruction timing and throttling delays is, "where's the drum at this point in time?"

Design of the Performance Throttling Mechanism


When designing the emulator in late 2014, I decided to implement the drum internally as arrays of words in memory. Then I looked at the drum timing and decided that the latency delays were generally so large, I could combine the delays due to drum latency with the delays needed to throttle performance, and do them together within the memory access mechanism. The way I chose to do this is by maintaining two clocks:
  • The first clock maintains internal, emulated processor time. It counts word-times as a result of instruction fetch and execution. We have reasonably complete information on how the instructions were mechanized inside the processor, so the emulator keeps track of the word-times each instruction requires as it executes. Under the assumption that the emulator will run much faster internally than a real 205 would, this clock should run fast with respect to real-world time.
  • The second clock is the drum -- or the drum is the clock -- whichever way you want to think about it. This clock is real-world time. The drum rotates at a constant rate, so if we convert milliseconds of real time to word-times by dividing by 0.084 milliseconds/word-time and call that drum-time, then "(drum-time) mod (tracksize)" can be taken as the track-relative sector number under the head at that point in real time. Tracksize will be 200 or 20, depending upon whether the emulator is accessing main memory or one of the loops.
Knowing the track-relative sector where the head is now, and knowing the track-relative sector of the address being sought, we can compute the drum latency for an individual memory access. We know the number of words we are going to read or write (either 1 for normal accesses or 20 for block transfers), so we also know the track-relative sector where the memory access will finish, and therefore how many word-times (latency + word count) should be added to the internal processor clock to account for the memory access.
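As a sketch of the second clock, the drum-as-clock conversion can be written like this (constant and function names are illustrative, not the emulator's actual ones):

```javascript
var WORD_TIME_MS = 0.084;   // one word passes the head every 84 microseconds

function drumSectorNow(nowMs, trackSize) {
    // Convert real-world milliseconds to word-times (drum-time), then
    // take the track-relative sector currently under the read/write head.
    var drumTime = nowMs / WORD_TIME_MS;
    return Math.floor(drumTime % trackSize);
}
```

Passing trackSize=200 gives the sector for main memory; trackSize=20 gives the sector for one of the high-speed loops.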

Since we hope the emulation is running faster than real time, we can compute the total real-world time delay for a memory access as:

delay = (internal-time) - (drum-time) + (latency) + (word-count)

where "(internal-time) - (drum-time)" is the difference between emulated time and real-world time -- the amount by which the emulation is running too fast. We need to delay by that amount to allow real-world time to catch up. "(latency) + (word-count)" is the amount of real-world time the memory access takes.
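In code form, that computation amounts to the following sketch, with everything carried in word-times and converted to milliseconds at the end. The function and parameter names are invented for this illustration:

```javascript
var WORDS_PER_MILLI = 1/0.084;   // about 11.9 word-times per millisecond

function throttleDelayMs(internalTime, latency, wordCount, nowMs) {
    // internalTime, latency, and wordCount are in word-times;
    // nowMs is real-world time in milliseconds
    var drumTime = nowMs * WORDS_PER_MILLI;               // real time in word-times
    var delayWords = internalTime - drumTime + latency + wordCount;
    return delayWords / WORDS_PER_MILLI;                  // back to milliseconds
}
```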

Thus, the emulator operates by running unthrottled between memory accesses, then sleeping for a time that combines the delays for drum rotation and performance throttling. It provides a call-back function to the Javascript run-time to continue operation after the delay completes.

You can see how this works in the readMemory() and startMemoryTiming() routines of the emulator’s Processor object. First, readMemory() determines the type of memory access that will take place:
D205Processor.prototype.readMemory = function readMemory(successor) {
    /* Initiates a read of the memory drum at the BCD address 
    specified by C3-C6. The memory word is placed in the D register, 
    and after an appropriate delay for drum latency, the successor 
    function is called. Sets the memory access toggles as necessary.
    Note: the D register SHOULD be loaded after the latency delay, 
    but we do it here so that the word will show on the panels during 
    the drum latency */
    var addr;               // binary target address, mod 8000
    var cbCategory;         // setCallback delay-averaging category
    var trackSize;          // words/track in target band of drum

    addr = D205Processor.bcdBinary(this.CADDR % 0x8000);
    this.memACCESS = 1;
    if (addr < 4000) {
        trackSize = D205Processor.trackSize;
        cbCategory = "MEMM";
        this.memMAIN = this.memLM = 1;
        this.D = this.MM[addr];
    } else {
        trackSize = D205Processor.loopSize;
        cbCategory = "MEML";
        if (addr < 5000) {
            this.memL4 = 1;
            this.D = this.L4[addr%trackSize];
        } else if (addr < 6000) {
            this.memL5 = 1;
            this.D = this.L5[addr%trackSize];
        } else if (addr < 7000) {
            this.memL6 = 1;
            this.D = this.L6[addr%trackSize];
        } else {
            this.memL7 = 1;
            this.D = this.L7[addr%trackSize];
        }
    }

    this.startMemoryTiming(addr, trackSize, 1, cbCategory, successor, 
                           this.readMemoryFinish);
};
The property this.CADDR is the address field of the C register, which specifies the memory address to be accessed. The this.memXXX properties represent the memory control toggles shown on the right side of the Supervisory Panel. These are set only for display purposes. cbCategory is a short string that identifies the category of delay we require and is used with the setCallback() function discussed below. The successor parameter is a function that is to be called once the memory access finishes -- it's often the part of the Fetch mechanism that loads an instruction into C or the part of the Execute mechanism that actually executes an instruction after any operand word has been loaded from memory.

Based on the value of the address, the routine selects either the array of words for main memory (this.MM) or the array for one of the loops (this.L4, this.L5, etc.) and sets the appropriate memory control toggles so that they will display on the Supervisory Panel. It also chooses the appropriate track size -- D205Processor.trackSize=200 or D205Processor.loopSize=20 -- and loads the desired word into the Processor's D register. "%" is the Javascript modulo (remainder division) operator.

The this.readMemoryFinish parameter is a small function that will be called after the delay for memory access completes. It does some housekeeping and calls the actual successor function.

The startMemoryTiming() function is common to all of the memory access routines. It handles the chore of computing the necessary latency and delay, updating the internal processor clock, calling this.updateLampGlow() to compute average lamp intensities, and finally initiating the delay by calling setCallback():
D205Processor.prototype.startMemoryTiming = 
        function startMemoryTiming(addr, trackSize, words, 
                   cbCategory, successor, finish, finishParam) {
    /* Starts the necessary timers for the memory toggles to 
    aid in their display on the panels. Then delays the amount 
    of time required for drum latency, data transfer, and amount 
    necessary to bring the processor internal time back in sync 
    with real-world time */
    var latency = (addr%trackSize - 
                  (this.procTime + 
                      D205Processor.memSetupTime)%trackSize +
                  trackSize)%trackSize;

    this.memACTION = 1;
    this.updateLampGlow(this.procTime);
    this.memoryStartTime = this.procTime;

    this.procTime += latency + words + D205Processor.memSetupTime;
                     // emulated time at end of drum access
    this.successor = successor;
    this.scheduler = setCallback(cbCategory, this,
            this.procTime/D205Processor.wordsPerMilli - 
                performance.now(),
            finish, finishParam);
};
addr%trackSize is the track-relative sector number of the desired memory address. this.procTime is the internal processor clock, maintained in units of word-times. Taking the value of the clock modulo trackSize gives the track-relative sector number where the emulated processor thinks the drum is now. D205Processor.memSetupTime is the number of word-times necessary to set up and initiate the memory access internally, so adding that to the internal clock indicates where the drum will be when the processor actually begins looking for the desired sector.

Taking the difference in sector numbers between where the drum is now and the desired address gives us the latency. Depending on the relative positions of the two sectors, however, that difference could be negative, so I add the value of trackSize to make sure the difference is positive, then mod that sum by trackSize in case it was positive to begin with, thus keeping the result between 0 and trackSize-1.
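That positive-modulo trick can be isolated in a small function, a sketch of the expression inside startMemoryTiming() with illustrative names:

```javascript
function drumLatency(addr, procTime, setupTime, trackSize) {
    // Sector we want, minus the sector under the head once internal
    // set-up completes. Adding trackSize before the final modulo
    // forces the result into the range 0 .. trackSize-1, even when
    // the raw difference is negative.
    return (addr % trackSize
            - (procTime + setupTime) % trackSize
            + trackSize) % trackSize;
}
```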

this.updateLampGlow() updates the calculation of lamp intensities for the display panels so that they exhibit a somewhat realistic partial brightness. Based on current register settings, it computes a time-weighted exponential running average of the 0/1 state for each bit in each displayed register. That calculation will be completed by this.readMemoryFinish at the end of the memory access.

The routine updates this.procTime with the latency, memory access setup time, and word count, which gives us the internal clock time (in word-times) when the access should finish. It also stashes a reference to the successor function in a global property (this.successor) for use by the finish function (which is this.readMemoryFinish when called by readMemory) after the delay is complete. Finally it calls setCallback(), discussed in more detail below, which will call setTimeout() to initiate the necessary delay.

That delay should be the difference between the value of the internal clock when the emulated memory access starts, and the value of real-world time when the physical memory access finishes. Since the emulator models the drum internally as a set of memory arrays, the physical time to load the desired word into D is negligible, and in fact was already done in readMemory(). Thus the delay is simply the difference between the internal clock and real-world time. Dividing by the value D205Processor.wordsPerMilli converts word-times to milliseconds, the unit required by setCallback() and setTimeout().

Javascript has a Date object from which you can get real-world time, but instantiating that object every time you need to have the current time results in a lot of overhead. There is a special timer designed for performance measurement, though, performance.now(), that's both efficient and more granular than the Date object, so I use that for all timing within the emulator. It returns a floating-point number in milliseconds.

Managing the Javascript Delay Mechanism


I have gone through this rather long-winded explanation of memory access and timing partly to give you an idea of what's involved in regulating performance in the emulator, but also to talk about the role of setCallback().

The setCallback() function is something I originally developed for the retro-B5500 emulator. Its main purpose is to compensate for two nasty characteristics of the standard Javascript setTimeout() function:
  1. As discussed above, the delay you get using setTimeout() is always at least as long as the one you requested, and may be much longer, depending on what else is happening in the browser at the time.
  2. The minimum amount of time you can delay in modern web browsers is about 4ms. If you specify a smaller delay than that, it is treated as if you had specified 4ms.
The reason for the 4ms limit is that there are a number of older, poorly-behaving web scripts out there (many of them games) that are designed to run as fast as they can with no delays at all if they can get away with it. These scripts still need to exit back to the Javascript run-time periodically, however, and some years ago you could specify a delay of zero for setTimeout() and actually give up control for a minimal amount of time. In operating systems, this brief release of control is termed a yield.

The problem with this yield technique is that it tends to suck all the juice out of laptop and mobile device batteries in short order. So the browser developers instituted minimum delays (originally on the order of 10ms) to keep users from blaming the browsers for their dead batteries. The web standards now require the minimum delay a browser must support to be no greater than 4ms, and as a result, all the major browsers now use 4ms as a minimum.

That standard 4ms minimum on delay time is not an issue for timing access to main memory, which averages 8.4ms. For loop access, though, which averages 0.84ms, the 4ms limit is a serious problem. The emulator often needs to delay for fractions of a millisecond, but there is no way setTimeout() will do that.

I had a similar problem, although for a different reason, when working on the B5500 emulator, so in 2013 went looking for a solution. It turns out that there is no finer-grained substitute for setTimeout(), but there are a couple of ways that you can give up control and have the run-time call a function you provide as soon as there is nothing else ahead of you in the queue, which is exactly what people were trying to do by passing a zero delay to setTimeout().

Browsers have a mechanism that can be used to pass messages asynchronously between windows, implemented using the postMessage() function for sending and the onMessage event for receiving. It also turns out that a window can send a message to itself, which is something I would never have thought of doing, but this is why we have Google Search -- to discover things we can't think up ourselves. Passing a message to yourself effectively produces a yield, and the event handler you supply for onMessage is simply the call-back function for that yield.
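A minimal sketch of the idea follows. A browser context is assumed, and the token, queue, and function names are invented for this illustration; they are not the emulator's actual identifiers:

```javascript
var YIELD_TOKEN = "emulator-yield";   // marks messages we post to ourselves
var yieldQueue = [];                  // call-backs awaiting their turn

function handleYieldMessage(ev) {
    // Service only messages this window posted to itself.
    if (ev.source === window && ev.data === YIELD_TOKEN) {
        ev.stopPropagation();
        if (yieldQueue.length > 0) {
            yieldQueue.shift()();     // run the oldest waiting call-back
        }
    }
}

function yieldToRunTime(fn) {
    // Queue fn, then post a message to our own window. The browser fires
    // handleYieldMessage as soon as nothing else is ahead of us in the
    // event queue -- with no 4ms minimum delay.
    yieldQueue.push(fn);
    window.postMessage(YIELD_TOKEN, "*");
}

if (typeof window !== "undefined") {
    window.addEventListener("message", handleYieldMessage, false);
}
```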

So, given the setTimeout() mechanism that will delay for at least 4ms, and the postMessage/onMessage one that will delay for a hopefully very small but unpredictable amount of time -- how do we generate a delay of 0.84ms?

The answer is that in individual cases we don't. Instead, we play the averages.

My setCallback() function has the same overall purpose as setTimeout() -- you call it passing a requested delay in milliseconds and a function to be called after that delay expires. If the requested delay is more than 4ms, setCallback() calls setTimeout(), which will always result in a delay that's at least a little longer than we asked for. If the delay is less than 4ms, setCallback() will call postMessage(), which should usually result in a delay that is shorter than we asked for. setCallback() passes its own call-back function to both of these mechanisms as a proxy, and when either the delay expires or the message event occurs, that proxy function measures the amount of time that actually elapsed. The proxy then executes the original call-back function passed to setCallback().

Subtracting actual elapsed time from the delay originally requested gives a deviation -- how far ahead of or behind schedule the delay completed. setCallback() keeps a running total of those positive and negative deviations, and on the next call to it, attempts to apply a portion of any accumulated deviation to the new requested delay. If the accumulated deviation is positive (actual delays are running longer than requested), it attempts to reduce the new requested delay. If the accumulated deviation is negative (actual delays are running shorter than requested), it attempts to increase the new requested delay. Then that adjusted delay is compared to 4ms and used to decide whether to call setTimeout() or postMessage(). In this way, the mechanism compensates for deviations in the individual delays, and the deviations tend to average out. Or that's the hope, anyway.
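The bookkeeping might be sketched like this. The fraction of the deviation applied here (one half) and all of the names are illustrative assumptions; the real setCallback() is considerably more elaborate:

```javascript
var deviations = {};   // accumulated (actual - requested) ms, per category

function recordActualDelay(category, requestedMs, actualMs) {
    // called by the proxy call-back after each delay completes
    deviations[category] = (deviations[category] || 0) +
                           (actualMs - requestedMs);
}

function adjustedDelay(category, requestedMs) {
    var dev = deviations[category] || 0;
    var applied = dev / 2;              // apply half the accumulated deviation
    deviations[category] = dev - applied;
    // a positive deviation (running long) shortens the new request;
    // a negative deviation (running short) lengthens it
    return requestedMs - applied;
}
```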

There is one more twist to this mechanism. I have been discussing delays in the context of timing memory access, but the emulator has other areas where it needs to introduce a delay, primarily in I/O. Because the different areas where the throttling mechanism is used have different delay lengths and patterns, I didn't want to lump them all together for the purpose of adjusting deviations. Thus the delays are divided into categories, and a separate cumulative deviation is maintained for each one. The cbCategory variable mentioned back in the discussion of readMemory() is a short string that identifies the category on a given setCallback() call. "MEMM" refers to the category for main memory accesses and "MEML" refers to the category for loop memory accesses. Each I/O device has its own category as well.

With this background on emulator timing and performance throttling, we can now discuss the problems the retro-205 emulator was having with its timing and what we did about it.


Analyzing and Correcting the Timing


When Tom pointed out that his programs were running faster than expected, there were three areas I thought could be prime suspects for the timing disparity. One was the way the emulator was accumulating internal word-time counts for instructions as they were executed. Another was the accuracy of the delays being computed for memory access and performance throttling. In the back of my mind, I was also wondering if the setCallback() mechanism, and especially the deviation adjustment portion of that mechanism, was working the way it needed to. It turned out all three were contributing to the problem.

I completely reviewed the way the emulator was accumulating word-times for each instruction, comparing that to some timing notes that Prof. Donald Knuth had kindly sent me from his files a couple of years ago. I found lots of discrepancies, so adjusted the word-times for many instructions, often upward by three or more word-times. This slows the emulation of those individual instructions.

Next, the drum latency calculations appeared to be correct, but in researching the timing of memory accesses in TM 4001, the 205 computer training manual, I realized that I had not been accounting for the time the processor needs to set up its internal logic for the access. This amounts to 2-3 word-times per access, and after some experiments I settled on the value 2. This is reflected in the D205Processor.memSetupTime field mentioned above, and also slows the speed of emulation somewhat.

Finally, I turned to the setCallback() mechanism. You might think that an algorithm for adjusting requested delays based on deviations accumulated from prior delays would be straightforward, but it hasn't been for me. I've completely rewritten that adjustment algorithm at least four or five times over the years, and still had my doubts about it.

I was finding it difficult to analyze the algorithm's behavior from inspection of the code alone, so I modeled it in a spreadsheet, and found that the adjustment mechanism was not as stable and well-behaved as I had thought. In fact, it was embarrassingly easy to come up with patterns of delay that would cause the adjustment mechanism to go completely unstable. So I sat down and really thought through the problem this time, came up with a completely new approach to calculating adjustments for requested delays, and modeled that until I had it working smoothly. This change did not actually correct the emulator timing per se, but it did eliminate the runaway instabilities and make the emulator timing more consistent over longer periods.

After making these changes, Tom and I independently reran the programs we had analyzed earlier to obtain theoretical run times. We found that the emulator was now generally operating within 2-3% of those theoretical times. For an emulator that operates internally at the instruction level rather than the clock level, I am quite gratified that we got that close. Flushed with this success, we declared victory over the problem of emulator timing accuracy.

Then Tom reminded me that there was this other program...


The Remaining Timing Problem


In the back of Prof. Knuth's copy of the 1956 Burroughs 205 Central Computer Handbook he donated to the Computer History Museum, there is a strip of paper that has been pasted onto the page. The strip has four lines that appear to have been printed by an IBM 407 tabulator, which read:
777 FAST DRUM ZERO. USES 4000 LOOP. ZEROES 6000-6019 AND 0000-3999 IN 1.1 SEC.
666608104D4024000002040260000064403500000224017100000260100000072402740810444016
6666081044403010000B601001000026006010000260020000007240350000012403500000754034
66660000204016000000039000000000010000810441000000002240241000026018010000260140
This is a 205 machine-language program in card-load format that, as the first line indicates, clears the 205 main memory in 1.1 seconds. It turns out that Prof. Knuth himself wrote this program ca. 1960.

I had discovered this piece of code early in my research into the 205 before beginning work on the emulator. Largely to better understand how the 205 worked, I manually disassembled the code and did a detailed analysis of it. Since this was a program for which we had a reported execution time, Tom suggested we should see how long it actually takes to run in the emulator with its corrected timing.

The program runs so quickly that it's difficult to time it manually, but the number I came up with using the Firefox browser was 3.2 seconds, almost three times longer than its reported run time. So much for the emulator timing being accurate to within 2-3%. That surprisingly slow performance launched an effort that eventually took me more time and thought than the original timing analysis and corrections had.

The first question to resolve was whether the reported 1.1 second run time was realistic. As a basis for comparison, I constructed a test that cleared the 4000 words of 205 main memory in a straightforward way -- by storing individual words in a loop. The following code illustrates the technique:
0050  0 0000 02 3999      STC  3999    clear A
0051  0 0000 72 0050      SB   0050    set B to 3999
0052  0 0000 30 0053      CUB  0053    block into 7000 loop and execute there
0053  1 0000 02 0000      STC  0000    zero word at B+0
0054  0 0000 22 7053      DB   7053    decrement B and loop if B >= 0
0055  0 9999 08 9999      STOP 9999    halt
This code takes about 69 seconds in the current retro-205 emulator under Firefox. That is because each store to the next successive location in main memory must wait a full drum revolution of 16.8ms. Just multiplying 4000 times 0.0168 is 67.2 seconds, so this test generates a reasonable result. Reducing the run time of this straightforward approach by a factor of more than 60 would be quite an accomplishment.
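That back-of-the-envelope estimate can be written out directly (the constants come from the discussion above):

```javascript
// One full 16.8 ms drum revolution per word stored, for all 4000 words
// of 205 main memory.
const WORDS = 4000;
const DRUM_REV_MS = 16.8;                   // memory-drum revolution time
const totalSec = WORDS * DRUM_REV_MS / 1000;
console.log(totalSec);                      // about 67.2 seconds
```

The couple of extra seconds observed in the emulator are the loop-control instructions and their own drum latencies.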

Prof. Knuth's program works by taking advantage of the block-transfer instructions of the 205. These instructions transfer 20 words between main memory and one of the high-speed loops in one drum revolution, plus the latency necessary to reach the starting address in main memory. The program initializes the 6000 loop to zeroes, then does five successive BF6 (Block From 6000 Loop to Main Memory) instructions to clear 100 words, decrementing the B register to step through all 4000 words in 40 iterations. The really clever thing the program does is to offset the addresses of successive BF6 instructions by 40 words instead of 20, allowing the fetch and setup of the next BF6 to occur during the intervening rotation of the drum, a technique known as interleaving.

To confirm the reported time, I took the disassembled code for the program, and once again referring to Prof. Knuth's timing notes, attempted to determine not only the amount of time each instruction should be expected to take, but also how the drum position changed with each instruction, and therefore the amount of drum latency and transfer time to expect for each memory access the program did. That analysis showed that one iteration clearing 100 words should take a total of 298 word-times, or 25.04ms. 40 such iterations should therefore take about 1002ms. Adding the time to initialize the program and then the 6000 loop to zero, the total time should be very close to 1.1 seconds. Thus, I had to accept that the reported time for the program was correct.
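For readers who want to check the arithmetic: the drum holds 200 words per band and revolves in 16.8ms, so one word-time is 84 microseconds. Using those constants reproduces the figures above to within rounding:

```javascript
// Reconstructing the timing estimate for Knuth's fast drum zero.
const WORD_TIME_MS = 16.8 / 200;          // 0.084 ms per word-time
const perIteration = 298 * WORD_TIME_MS;  // one 100-word clearing pass
const loopTotal = 40 * perIteration;      // 40 passes clear all 4000 words
console.log(perIteration, loopTotal);     // roughly 25 ms and 1000 ms
```

Adding the setup time on top of that loop total lands squarely on the reported 1.1 seconds.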

The next question was then, why did the emulator take three times longer than that? I added a temporary trace to the emulator to record the real-world times taken by the individual instructions as they executed in that program. This showed that the revised setCallback() mechanism was doing a superb job of regulating the adjusted delays, but every couple of hundred milliseconds there was a large deviation averaging about 100ms. I suspect that this was caused by Firefox inserting its background tasks into the run-time queue, where they took precedence over the emulator's execution. To compensate for this, the setCallback() mechanism was adjusting the delays for emulated instructions downward, to the point that instructions were executing in about half their theoretical execution time. That is probably about as fast as the emulator can go. Performance under Google Chrome is better than under Firefox, but only by about 10%.
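The temporary trace itself was nothing fancy -- something on the order of this sketch (the names here are hypothetical, not the emulator's actual internals), which records the real-time delta between successive instructions so that the outliers stand out:

```javascript
// Record the real-world interval between successive emulated instructions.
const trace = [];
let lastTime = null;

function traceStep(opCode) {
    const now = performance.now();
    trace.push({opCode, deltaMs: lastTime === null ? 0 : now - lastTime});
    lastTime = now;
}
```

Scanning the recorded deltas for values far above the norm is how the roughly 100ms spikes showed up.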

I finally concluded that this was a case where the emulator simply could not adjust the delays low enough, and consequently could not keep up with real-world time. The design of the performance throttling mechanism was predicated not only on introducing throttling delays during drum latency, but also on using those delays to absorb the overhead of browser background activities. Alas, Prof. Knuth's coding of the program so efficiently minimized drum latency that there isn't enough latency left to absorb all of that overhead. Thus the emulation of the program ran too slowly.

The final question was what to do about this situation. One possibility would be to use an approach to throttling performance similar to the one I use with the retro-B5500 and retro-220 emulators. Those systems had core memories, with memory access latencies less than one-thousandth that of the 205. Throttling in those emulators works by allowing the processor to run unthrottled for a time slice equivalent to several milliseconds of emulated time, then comparing the internal clock to real-world time and pausing execution for the difference to obtain the appropriate average performance.
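In outline, that time-slice scheme looks like the following sketch. This is illustrative only, not the actual retro-B5500 or retro-220 code; the slice length, function names, and parameters are assumptions:

```javascript
// Time-slice throttling sketch: run emulated instructions unthrottled for
// a slice of emulated time, then pause for however far ahead of real time
// the emulated clock has gotten.
const SLICE_MS = 10;    // emulated time per slice (an assumed value)

// step() emulates one instruction and returns its emulated duration in ms;
// runThrottled() executes totalMs of emulated time and returns the real
// time actually taken.
async function runThrottled(step, totalMs) {
    const start = performance.now();
    let emulated = 0;                       // emulated ms executed so far
    while (emulated < totalMs) {
        const sliceEnd = Math.min(emulated + SLICE_MS, totalMs);
        while (emulated < sliceEnd) {
            emulated += step();             // run unthrottled within the slice
        }
        const surplus = (start + emulated) - performance.now();
        if (surplus > 0) {                  // ahead of real time: sleep it off
            await new Promise(resolve => setTimeout(resolve, surplus));
        }                                   // behind real time: keep going
    }
    return performance.now() - start;
}
```

The appeal for the 205 is visible in the structure: dozens of per-access drum latencies within a slice collapse into a single setTimeout() call, leaving the browser's background work to be absorbed by that one larger pause.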

This approach is eminently feasible for retro-205, and would probably require at most several days of work to implement. If employed in the 205 emulator, it would serve to aggregate multiple drum latencies and per-access throttling delays into one larger delay. That would result in considerably lower overhead within the emulator for memory accesses, and may free up sufficient time for the browser overhead functions to execute without impacting the overall accuracy of emulator performance. Then again, it may not.

In the end, Tom and I decided to do nothing, at least for now. The current throttling mechanism appears to provide quite accurate timing for all of the other processor-intensive programs we have tested, and was defeated only by one very small, very carefully crafted program that took maximum advantage of drum latencies overlapping instruction execution to minimize total run time.

Besides, clearing memory is something you normally do only once, before loading a program for execution. Having to wait an additional two seconds for that is not exactly a crisis.