DTACK GROUNDED, The Journal of Simple 68000/32081 Systems
Issue #40 - April 1985 - Copyright 1985 Digital Acoustics Inc.
It appears definite that AT&T's cheap UNIX/68010 machine, the model 7300 (code-named SAFARI), will be introduced on 25 March. Unfortunately, this issue should be pasted (actually waxed) up that morning and turned over to the printers after lunch. So we can't cover that model in this issue.
Yes, it's that time again. But we won't have any 'April Fool' stuff in this issue because our technician, Ray S., who first worked with us 'way back in 1962, threatened to quit after reading the first half of the first column on the front page of LAST year's April issue.
(Does anybody out there remember John Martellaro's and PEELINGS II's SCINTILLATOR spoof of two years back, which got John in even more trouble because LOTS of folks fell for it hook, line and sinker?)
Actually, our 'April Fool' item was in the LAST issue, when we reported that HALGOL had already been shipped. It hadn't. We meant well at the time when we wrote that it had... is there ANYBODY out there who has not been knocked on his or her butt by that doggone Philippine flu bug? That was a nasty one!
Although there have been six weeks between each of the last two newsletter mailings, keep in mind that the HALGOL documentation is (at least) equal to another newsletter, so if we factor that in we have been producing 28 pages every four weeks lately, or a page a day, seven days a week. What with the compressed print we use, that's the best we can do over the long haul.
(So Nils D.'s 700 page book which EXACTLY describes HALGOL will be just a tad late! Sorry, Nils.)
-------------------------------------------------------
Stride Faire '85 highlites ........................ p.3
More Amateur 68000 hackers ........................ p.5
How Fast/Slow is C ? .............................. p.7
CP/M-68K ......................................... p.10
FNE's own minicomputer (really!) ................. p.10
The Keronix fire ................................. p.12
Bit-mapped graphics .............................. p.13
Windows .......................................... p.15
Unix gathers momentum ? .......................... p.18
A tale of two micros ............................. p.21
We Get Mail ...................................... p.22
HALGOL report .................................... p.27
-------------------------------------------------------
An old XMAS joke prompted us to seriously suggest to a major customer that they might consider foot-assisted input on their CADD workstations. We pointed out that feet provided additional input on certain musical instruments including the organ, piano and some types of drums. In particular, we suggested that foot-assisted input might be superior to mice and in some cases superior to bit-pad inputs. We made this suggestion last December.
Well, an outfit now makes a "foot-mouse". This is being advertised in BYTE (see March p.44) and in the various PC-related publications. Perhaps our suggestion was not so ridiculous after all? (We don't know how that foot-mouse works yet - do you slide it around or what?)
Meanwhile, Jerry Pournelle took time off from lambasting the Mack in March BYTE to tell about a new keyboard which features a track-ball. Now, we personally greatly favor a keyboard-mounted trackball over a mouse, for reasons which all touch typists will understand. (If you are a hunt-and-pecker, go away.)
Basically, the track-ball, when keyboard-mounted, will always be close to and also have an absolutely fixed position with respect to the home keys. With only a little practice, it should be possible to move one's hand back and forth between the home row and the trackball without taking one's eyes off the CRT - something which will never be practical with a mouse. We also believe that finer control ('vernier') is possible with the trackball than with a mouse. Besides, we are a messy-desk type [candidate for understatement of the year] and we will NEVER have the necessary cleared area for the mouse. There are no such problems with the trackball. With the footmouse? We dunno.
(Continued on page 17)
We somehow feel that Tandy has jumped on its horse and ridden off in all directions. One of the magazines we subscribe to is 80 MICRO, which bills itself as "the magazine for TRS-80 users". Well, back in the old days, TRS-80s sold well and were Tandy's principal product, computer-wise. Here are the varieties of Tandy computers now:
That's seven categories of computers, and the owner of one category of computer most certainly will not want to read about the other 6 categories. And although 80 MICRO's subtitle claims it services category A, those computers are not selling so well these days. In fact, Tandy's next big offering into that market is supposed to be an advanced TRS-Color machine with BASIC-09 and OS-09 in ROM!
If you think those last three machines, the 2000, 1200 and 1000, are similar, forget it. They aren't - we own a 2000, remember? The 1200 is so IBM-compatible that it bears no relation to other Tandy products at all. Since that computer was not designed or built by Tandy, that's not surprising. We assume most of you know that Tandy bought those 1200's out of Jugi Tandon's warehouses.
As a result of this product diffusion Tandy-related computer magazines are having a tough time, and Tandy's sales aren't exactly sparkling.
(The evident fate of the 2000 to be one of the personal computer industry's "almost-never-rans" is sad. If the 2000 had been introduced with the logo "IBM" in striped blue letters, it would have been a runaway smash-success. The 2000 features a CPU as powerful as the 6MHz 286 used in the AT, but with (apparently) fewer bugs, ample floppy disk capacity (720K per disk) compared to other contemporary systems, MS-DOS compatibility, 4 times better color graphics resolution than the IBM PC, a good keyboard, a writeable character generator... lots of good stuff. And remember, it was a year ahead of the AT.)
Commodore is saddling up its steed in an apparent attempt to gallop off in all directions itself! Rumors and announcements indicate machines based on the 8088, Z8000 and 68000 in its near future as well as the old
standby 6502 (and 6502 derivatives) PLUS, at this late date yet, the Z-80. The Commodore-related magazines such as the Transactor (Canada) and Midnite are going to be faced with the Tandy dilemma soon instead of writing exclusively about the C-64.
Reader Terry P. recently attended a C-64 users' group meeting in the Bay area and found both attendance and enthusiasm down compared to a year ago. This is not especially surprising; SALES of the C-64 are down too. It appears that the C-64, 3 million copies strong, is about to become history. Well, what do you expect from a machine whose disk drive transfers a frabjous 320 bytes per second?
It remains to be seen whether the forthcoming Commodore 128K machine, which will include a 6502-clone CPU as well as a Z-80 (for CP/M capability), will restore Commodore's fortunes. We don't think so. We think the machine might sell several hundred thousand copies, which constitutes failure by Commodore standards. And if Commodore brings out a PC-clone based on the old Canadian Bytec design it will fail worse than miserably, for several reasons.
InfoWorld's columnist John Gantz asserted in a recent column (as an aside) that Commodore dumped Tramiel at the right time. You would almost think Gantz is unable to read financial reports (see #39, p.21)!
It is almost embarrassing for a newsletter writer like us to find ourselves in 100% agreement with a mass-media publication over something which is really a guess. But that is where we find ourselves - and Ziff-Davis's PC WEEK - with respect to the REAL reason for the slow and late delivery of IBM's AT. (In case anybody from PC WEEK is reading this, we wrote about completed but slow-selling inventory - in that case, the Apple III - 'way back in issue #4 of this rag (pp9-10), dated Nov '81. In fact, our writeup on that slow-moving inventory was apparently widely read in the Apple legal department via illegal photocopy, as we reported in issue #5, p4.)
Basically, we (us and PC WEEK) believe that excess unsold PC/XT inventory is what is holding up shipments of the Enhanced (hard disk) version of the AT. The reason IBM went ahead and introduced the AT last fall with all that unsold PC/XT inventory was that the simpletons running IBM's personal computer marketing really believed that folks were going to follow IBM's directive to purchase and use the AT exclusively as a multi-user system, so that AT sales would not cannibalize XT sales. Hah!
So the AT was pulled off the market for about nine months, which is obviously how long IBM thinks it will take to sell all those XTs. What happened was that too many folks refused to buy the XT, and instead went on retailers' waiting lists to get their genuine AT. IBM's marketers apparently did not realize that retailers cannot pay the rent with waiting lists, and what happened next caught them by surprise:
(IBM had used parts shortages of the hard disk as an excuse for pulling the Enhanced version of the AT from the market, but had told retailers to go ahead and order all the floppy-disk versions of the AT they wanted.) The retailers, wanting to place food on the table for their children, quickly learned how to upgrade the floppy disk version of the AT to an Enhanced-AT compatible and started to sell them as fast as they could make them.
IBM, shocked and dismayed by the undesirable turn of events, has suddenly developed a shortage of the floppy disk version of the AT that is even worse than the shortage of the Enhanced-AT. Now, will everybody who thinks excess XT inventory is not what is driving the shortage of both kinds of ATs please put on a dunce hat and go stand in a corner?
(This is being typed on 2 March.) It has doubtless reached your attention by now that IBM is about to introduce a successor to its PC. Based on purely logical considerations, we predicted some time back that the CPU would be the Intel 186. PC WEEK long ago hinted, and more recently flatly asserted, that the CPU would be the 286. Not everybody agrees. For instance, EET had a Feb 18 feature story on the upcoming PC II and suggested that the CPU might be any of a variety of selections - even the 8088 or 188!
Now, look: IBM has done some incredibly stupid things - remember the original keyboard and the original prices hung on the PCjr? - but not even IBM is stupid enough to design an 8088 into the PC II! What is truly fascinating is the thought that IBM might use the 80188 in the PC II - the chip which, we believe, would have powered the PCjr had Intel been able to produce it at the time. You see, the same CPU which would have made lots of sense in the PCjr over a year ago makes no sense at all in a PC replacement in 1985. But who ever said IBM was sensible? (The 80188 is the 8-bit bus version of the 186.)
For the record, here is why we think using the 186 would be sensible: the chip requires fewer support and 'glue' chips than the 8086/8 or the 80286 and hence is cheaper and simpler to manufacture. And IBM wants to be the low-cost producer and also to have an efficient,
automated production line. The 186 fits best and does not incur a performance penalty, considering 8MHz parts are readily available. PC WEEK's choice of the 286 may well be correct but is NOT sensible. The 286 requires more support circuitry than the 186, is more expensive, and is NOT readily available in the kinds of quantities needed for a PC-successor. Besides that, the 286 is rumored to still be heavily bug-ridden, which, if true, IBM would surely be cognizant of.
Most of you know that the Intel math coprocessors (e.g. the 8087) are dedicated to a specific Intel CPU because of a need to track the instruction prefetch-queue in the CPU. Thus we have the 8086/8087 and the 80286/80287. Although Intel announced an 80187 math chip to work with the 80186 CPU, they decided not to produce it. The 80186 remains without direct math chip support. A large LSI chip is needed to couple the 8087 to the 80186, and this chip (an Intel source tells us) costs about $15 in large quantities. That $15 parts cost winds up as $75 in the retail computer price tag using IBM's markups and $100 using H.P.'s markups. One thing no computer needs these days is an extra $75 or $100 on its price tag. Therefore, the 80186 is CONSIDERABLY less cost-effective in an environment that needs a math chip - and remember, the entire purpose in life of the 80186 was to be cost-effective!
Question: Could Intel's decision to drop the 80187 math chip be related to a decision by its parent company to bypass the 80186 as an engine for its personal computer line?
"The glowing recommendation of Jerry Pournelle, the chance to meet Niklaus Wirth, the off-chance of finding someone interested in ASSEM68K, and plain curiosity about what's going on with the Stride Micro crowd got me to Reno on February 8-10th. What I found (as J.P. said it would be) was a small gathering, by computer show standards, of perhaps 500 attendees and a dozen or two exhibitors.
"The exhibitors were of three categories: 1) Vendors of software products for the Sage/Stride systems - mostly P-system stuff; 2) Vendors of non-Stride hardware add-ons for Sage/Stride computers; and 3) Vendor(s?) of non-Stride hardware/software. This last category contained (unless I missed some) only Modula Corporation, which was displaying the Lilith and their Modula-2 running on the Mackintosh. They were also advertising, but not displaying, a Modula-2 for the IBM PC. (I guess they didn't have the [Anglo-Saxon colloquial term for male reproductive organs deleted]
to bring an IBM PC to the show - they probably thought bringing the Mack was gutsy!)
"Continuing backwards through my list, several of the category 2 exhibitors had graphics attachments to the Strides - the majority seeming to favor the Thomson-CSF chip, which I'd not before heard of. The most intriguing product to me in this category was an add-on box for the Stride into which one can plug IBM PC boards (!), including the new AT-style graphics adaptor. However, much like the offerings of another hardware vendor I could name, these had precious little software support - the flashiest showing being that of Stride itself. They showed off their new graphics board with an animated chessboard, viewed in perspective, wherein the queen perpetually dashed about on a triangular route (that ignored the chess rules for queen moves).
"Far and away the most important exhibitor in category 1 (for us non-Stride 68K folks at least) was Sahara Software, Ltd. They have an operating system called "Mirage," written in 68000 assembler, which will run in 64K (barely - the code is only 40K). Actually, their recommended minimum hardware configuration is 128K with at least one floppy drive. Mirage, it turns out, has been around since 1981 (and is, therefore, well debugged) and has been "ported" to several 68000 systems for "Fortune 500" type customers in the U.K. It is billed as a "multi-user" operating system which means it is perforce multitasking - for us one-man, one-or-more CPU guys! It also has one level of subdirectory in its file system, so it can handle hard or high-density floppy disks conveniently. Sahara also has BASIC, Pascal, and FORTRAN compilers, as well as an APL interpreter - which comes in two flavors: regular (the usual hieroglyphs) and "keyword" (all the funny symbols are replaced by English words - the way Ken Iverson should've done it the first time!). Of course, all these goodies come at a price: several hundred dollars EACH for the "multi-user" Stride versions. However, Sahara recognizes that we "single-user" customers are somewhat more impecunious than their other buyers. For example, the APL interpreter is available for the Sinclair QL at 100 pounds (about $106) versus $500 for the Stride 420! Therefore, I'm hopeful some agreement between Sahara and Digital Acoustics can be worked out that won't break anybody's bank! (C'mon FNE! THIS is the 68000 assembler-based DOS you've been crying for!)
"Most of the talks contained little of hacker interest - being directed at Stride dealers and the Modula-2/Pascal groupies. (E.g. a panelist who shall be nameless [not N. Wirth] remarked at one point that the "last excuse for assembler programming disappeared with the advent of the optimizing FORTRAN compiler." I determined in private conversation with him later that
he LITERALLY BELIEVES that statement!) Two talks that were of interest were those by Jack Brown of Motorola and by Pete Wilson (no relation to the California politician) of Inmos. Brown said they'd shipped 3-5 thousand 68020's since last June, and that they expected to ship about 75,000 in 1985. Wilson, who was subbing for Iann Barron, gave a very entertaining talk (although his anti-American barbs - he's British - were somewhat extreme) about Occam and the transputer. He CLAIMED to have a working T424 32-bit transputer in his laboratory.
"Th-th-that's all, folks!"
Geez! Here we are being pilloried for not supporting yet ANOTHER operating system, and this time using our own ink! Has anybody, like several hundred of you REAL recently, noticed that we are in fact working HARD on a very fast operating system and language? Which (for now) has a VERY low price tag? And which everybody - not just DTACK customers - can examine in source form to their hearts' content?
We assume all of you saw the editorial on page 6 which asserts that 1985 will be the year of the 68000. It looks to us like the year of 68000 ANNOUNCEMENTS! The Sinclair QL is just beginning to ship in (some) quantity in Britain and probably will not appear in this country until early '86. Commodore's Amiga may not be real - a working model has never been shown in public without a minicomputer sitting behind a door to make the Amiga work, and (as we reported before) the most recent 'Amiga presentation' was some SLIDES, we kid you not, which were purportedly prepared on an Amiga. Atari's Jackintosh works and Atari has already produced a short (about 100 unit) pre-production run but the question is whether Atari has the cash to produce the Jackintosh in quantity for the mass market. The AT&T 7300, designed and manufactured by Convergent Technology, is coming along sometime soon but how much is it going to cost you to turn on the power switch - and how long will it take non-UNIX types to learn to run it?
The Sinclair QL is in trouble in Great Britain. Sinclair has stopped taking delivery of the QL from its suppliers for about a month. Remember, Sinclair does not maintain a production facility, but contracts the work out. And just a few months back, it seemed the problem was that Sinclair could not DELIVER the QL!
We still think the problem is that the QL is an absolutely ideal 32-bit rock-shooting toy for the U.S. market while Britons - and Sir Clive himself - keep trying to turn that tiny-tape-cartridge-based $500 toy
into a business machine. The QL, not the Atari machine, is THE trash-68 which the marketplace, U.S. division, has long been waiting for. If only Sir Clive had taken that month's production and shipped it to the U.S... Hell, Digital Acoustics would have bought five. At least! (The QL supports an 80 column upper and lower case display and is, potentially, an ideal mass-market vehicle for HALGOL, which the Mack and the Atari are not, at least for now.) Incidentally, we understand the QL is for sale now in Canada via mail order.
Meanwhile, Mackintosh sales have slowed drastically. It is reported that only 15,000 Macks were sold in Jan. What is known for a fact is that Apple has scheduled a one-week shut-down of all four of its plants, including the automated plant in Fremont and their off-shore facility in Singapore. They have also given their dealers a $300 discount on a Mack package including a second drive and the printer, AND - get this - they are paying the individual retail clerk who makes the sale another $100! Obviously, Sculley was watching the successful December PCjr promotion by IBM. It is too soon to see what that $400 cut will do to Mack's sales, if anything.
Most of you have noticed by now that folks either take to the Mack or they don't. We happen to be among the folks who don't, to the dismay (and occasional rage) of some of our readers who are of the opposite persuasion. As we have explained before, we have chosen to moderate our criticism of Mack for fear that our motive might be misunderstood. But now that doggone Jerry Pournelle has, it would seem, read our mind and then violated our mental copyright by publishing the Mack-related material found therein in his March Chaos Manor column! (Perhaps Jim Strasma can arrange for the FBI to arrest J.P. on copyright charges?) [Inside joke for readers of MIDNITE, #22, p.8.]
Be sure to read Jerry's entire column; he changes the subject away from Mack and then comes back and devastates (by implication) Mack's FINDER module, something which is obviously busted and badly needs fixing. John Dvorak recently suggested that Apple is preparing new, larger ROMs for Mack. Since FINDER is in ROM, that would provide a good opportunity to fix it.
And it is hilarious that unexpandable Mack has to have a CPU-ectomy to operate a fast disk drive (e.g. the Hyperdrive)! We thought the serial port was supposed to do that job? Boy, Apple was sure smart to leave out a second disk drive in the Mack case! Is there anybody who has owned a Mack for more than five minutes who has not purchased a second disk drive?
Yet another amateur 68000 group has surfaced, as reported in BYTE (p.356). This one is called "The Hacker's Mack" or "An Open Architecture Implementation of the Mackintosh Paradigm". Victor Frank's 68796 newsletter, issue #3, carried much more information about this nascent group. It is headed by Lee Felsenstein, whose address is:
2600 10th St.
Berkeley CA 94710
Space is being provided on Gordon French's bulletin board: keyword SWIFT, 1200 baud, (408) 736-6181.
We have serious reservations about the goals of this group. They seem to be dedicated to perpetuating many of the worst features of the Mack. It is interesting to note that Lee Felsenstein has an 8080 background - he designed the Processor Technology SOL and the Osborne 1. Like lots of other folks, Lee has made the very serious error that what a "better Mack" needs is bit-mapped graphics just like the real Mack only with more resolution and multiple bit planes for color. WRONG! We will discuss this in more detail, perhaps elsewhere in this issue.
J.P. quotes Mike Lehman on UNIX: "A professional developer's power tool and a nightmare for casual users". Seems the most sensible single-sentence view of UNIX we've run into in a long time.
We have long believed that Pick was, and is, a much better small-business operating system than UNIX (which is much better than Pick for program development). Be sure to see the letters in Mar BYTE beginning on p.14 about this system. As we told you before, the big problem Dick Pick has had in recent years is too many customers!
The largest land-holding in our area is Irvine Ranch. Irvine Ranch could make a nice living selling off small pieces over the years but they prefer to retain ownership. Over 25 years ago, they leased a bunch of lots in desirable locations for folks to build their homes on. The leases had terms of 99 years but called for the lease rate to be renegotiated every 25 years, with the new rate to be based on an independent appraisal of the value of the land, or lot. The area was originally mostly desert, but by the time the 25-year reappraisal came due it was heavily built up, which called for MASSIVE increases in lease payments!
(We will not pass any judgements on the purported intelligence of persons who would build a house on land they did not own.)
Naturally all the homeowners screamed loudly, but the soundness of Irvine's legal position proved unchallengeable. Enter one very small divorcee-troublemaker: this little lady, whose name we forget, organized all those homeowners into an extralegal sort of boycott of Irvine. It's like this: you probably know that a county permit has to be obtained before any construction is started. This generally involves a public hearing because almost any construction requires some form of variance. The hearing officer is much like a judge, but is legally required to hear anyone who wishes to speak. But you probably did NOT know that the same procedure must be followed even if the 'construction' involves merely 'cutting' a sidewalk to install a driveway.
Most such dinky permits are routinely approved without anyone showing up to speak about them. Irvine being by far the largest developer in Orange County, hundreds of such dinky permits (and some not so dinky) needed to be approved each week for ongoing Irvine development. Well, this feisty divorcee organized groups of 25 to 50 people to show up at ANY and EVERY public hearing involving the Irvine Ranch. These folks would line up to speak against any and every Irvine-related proposal. What happened was, Irvine development pretty much ground to a halt. Irvine Ranch management was GREATLY irked at that feisty divorcee!
The Irvine folks finally capitulated and agreed to renew the leases for much smaller payments than they were contractually entitled to. End of Nth retelling of David vs. Goliath. We are not certain that the good guys won this time...
During this extralegal protest, there were MANY details to attend to. Remember, this was a MASSIVE protest involving MANY people. The feisty divorcee was advised to seek help from a computer person to organize all those details. She finally was introduced to a computer professional who had access to computers at work and, perhaps, at home. The computer professional was of the male persuasion and they had to spend a lot of time together... one thing presumably led to another because they got married.
All of the above was reported extensively in the Orange County edition of the Los Angeles Times. Please understand that this lady was very, very well known in Orange County for a while. We are almost ashamed that we do not remember her former name. But we have no trouble at all remembering her NEW name:
It's "Mrs. Dick Pick!"
"OOGAH" is the title of an item from the 'READER'S FORUM' in Mar 15 DATAMATION. This item is by Herb Grosch, who is a long-time gadfly of the mainframe computer industry. Herb was very active about 15 years back; we think he is semi-retired now. Or maybe there is more stuff going on in Mies, Switzerland than we know about. Anyhow, here are excerpts from his 'OOGAH' missive:
"An oogah idea is one that generates enthusiasm, then publicity, then venture capital, and in its most virulent stage, sweeps its entire trade with unreasoning and unsupportable vigor. As results and profits fail to appear, the financial community wolves begin to harry the bewildered stragglers, and the professionals and the symposium organizers peel off to promote the next excitement.
"...if the trade journals do a better job, the man in the street (assuming one is left since everybody seems to have scrambled onto the computer bandwagon!) will also be better protected. What we need is critical reporting much earlier, as new ideas surface and begin to heat up. We need more curmudgeonry and less pressagentry...
"In the end, we indeed get criticism, but it is often two or three years too late to protect our community from wasting its collective time and wealth on impossibilities. Please, will [DATAMATION's editors] try to lead us out of the valley of oogah? If you daren't say something is crap, at least giggle a little!"
No, Herb was NOT talking about mass-market UNIX. He was talking about artificial intelligence on the current generation(s) of computers and the so-called Fifth Generation. But, gee, his comments sure seem to FIT mass-market UNIX!
The other evening Bob P. momentarily startled us by asserting that fewer personal computers would be sold in 1985 than in 1984. On reflection, that is not such a wild assertion.
For instance, Mar 11 Electronics Weakly, p.44, reports that (according to market research firm Future Computing) fewer computers were sold during the last quarter of 1984 - that's the XMAS selling season - than during the last quarter of 1983. 500,000 fewer! But the dollar volume was $1.8 billion retail, up from $1.2 billion in 1983. That means folks were buying Apple IIes and IIcs instead of C-64s, KayPros instead of Timex electronic doorstops, and IBM PCs and PCjrs instead of TI99/4As.
It has become obvious to us over the past several years that applications programs, or (worse) other programming languages which are written in 'C' have horrendously poor performance. By 'horrendously poor', we mean in comparison to some obviously similar programs written in assembler. To refresh one's memory, let us cite four obvious, and fairly recent, examples:
During the period when we have been keeping a close watch on the microcomputer scene (about 5 years now) we have never once observed an example of a 'C'-based application program (or other programming language) which was even remotely in the same performance ballpark as an equivalent assembly-based program. (The same thing can be said of OTHER high-level languages such as PASCAL.) While we have obviously not been able to verify that EVERY such application or language written in 'C' is horrendously slow, we believe we have a sufficiently large statistical sample to make two observations:
Let us emphasize that we reached the above conclusions from real-world observations, not from theoretical considerations.
However, the first of the two conclusions above is hotly disputed by a number of persons, including some of you readers and also including lots of folk in the academic community who have never heard of us. See, for instance, the comment on optimizing compilers by Terry Peterson in his STRIDE FAIRE report elsewhere in this issue. Or, the following excerpt from a recent letter from a reader:
"It is known among all experts that the reason FNE cannot find a compiler that has a 50% overhead over assembly language is that such compilers have long ago been replaced by compilers that have NO EXTRA OVERHEAD AT ALL!! What? you don't believe me? I quote from Operating System Concepts by Peterson and Silberschatz, p.350:'We now have optimizing compilers that generate code that is at least as good as hand-written assembly language programs.' Since this book is used in the teaching of operating systems to graduate students in at least two major universities it must be true, yes? I mean, after all, what is FNE's minor opinion compared to all those master's and doctorate candidates in CS? YOU WILL BELIEVE THIS! YOU WILL ALSO GO OUT AND BUY A PCjr!" Bill H., West Linn OR
(Bill, certain aspects of your prose style seem vaguely familiar - FNE)
One reader who has taken a significant amount of time to educate us on the subject of the efficiency of 'C' compilers is Daniel L. (see #39 p.14 col.2), who went so far as to assert that the DRI 'C' compiler has only a 1.25-1 high-level overhead. Our comments on that led to further correspondence, followed by a phone call to clarify some points. We were trying to reconcile that 1.25-1 with the glacial slowness of certain DRI languages written in 'C', including that ill-fated "Personal BASIC" which is best benchmarked using an almanac.
To our surprise, Daniel asserted something like "Yes, many applications written in 'C' are very slow even though the language 'C' - and DRI's compiler - is VERY efficient!" Daniel then proceeded to offer technical reasons why this is true. We said, HOLD IT! You just GOTTA write us a letter about this! Here, then, is Daniel's letter on this subject (and on CP/M-68K, which will be discussed separately later):
"I have been observing the current debate on high-level languages vs. assembler for quite some time now. I have observed that packages such as WordStar 2000 are reported as having extremely poor performance, as being excessive in size, and so on. The next thing I read is generally a diatribe starting "And it's all because this was written in 'C'!" BULLPUCKEY!! Of all the idiot things I have heard in this industry, this one takes the cake. Pay attention now: First, C is a VERY simple low-to-medium level language, so simple that even a feeble-minded recursive descent compiler can generate remarkably good code. The DRI C compiler that comes with CP/M-68K seems to be much better than a lot of them out there, with about a 1.25-1 "HLL overhead",
although with modern techniques, MUCH better is possible. Second, the simplicity of C comes in major part from the fact that it has no built-in functions, I/O primitives, etc., as many other languages do; all it has are facilities for dealing with the machine in the machine's own terms (pointers, chars, words, registers, etc.). Therefore, any I/O, etc. must be supplied from outside the language, from externally defined libraries. Put these two statements together, and a remarkable conclusion can be drawn: If a program written in C is inefficient, EITHER THE ALGORITHMS USED ARE POTTY OR THERE IS SOMETHING WRONG WITH THE RUNTIME LIBRARIES. Let us be charitable and assume that the authors would not knowingly use bad algorithms (although I have something to say about that, also) and go on to inspect the only thing left: the runtime libraries.
"In looking at the libraries supplied for three CP/M-80 C packages, and the CP/M-68K library, and seven C libraries for the (yuck!) 8086 crowd, I have observed a common element. EVERY ONE OF THESE PACKAGES TRIES TO MAKE ITS NATIVE OPERATING SYSTEM LOOK LIKE WHAT THE IMPLEMENTORS PERCEIVE AS UNIX! The farther the native operating system is from having the features of UNIX, the harder the implementors try to make it UNIX, with the result that every program carries the overhead of a UNIX imitation.
"Look at CP/M-68K, for example. If there was ever a system farther from the conceptions of UNIX than CP/M (for the personal computer market) I haven't heard of it. Yet the DRI C library will put ALL the code necessary for I/O redirection via the command line (a UNIX feature), buffered stream files (a UNIX feature), etc., ad nauseum, into every program I write. I've studied the code in the DRI library, and have come to the interesting conclusion that just to get a character on the screen using any of the library functions increases the instruction path length by about two orders of magnitude over using purely CP/M related calls, simply because of all the "we gotta look like UNIX" code. The famous "printf("HELLO, world\n");" compiles to 34 bytes, but when all that UNIX garbage is linked in to get an executable codefile, it has expanded to over 15,800 bytes, and for each character to print we go through all the garbage.
"I would dearly love to know who started this, to me completely idiotic, belief that "if is's C, it's gotta be UNIX". I'd be gentle with him, I promise; the poor person is obviously not all here. He probably spends a lot of time editing newsletters. I've heard that's good therapy...
"Anyway, it's this "C = UNIX" attitude that is the cause of the inefficiency of a lot of programs, and not the low-to-medium level language C. My solution to the
problem is to rewrite the library to be used with CP/M-68K, not UNIX. Funny, when I did this, programs shrank and sped up rather dramatically. Equally funny, the programs are portable to any CP/M-like system. You can't take them to UNIX, but I DON'T RUN UNIX! Which brings up the question of portability. I don't particularly care if programs I write for CP/M-68K aren't portable to UNIX or MVS/XA or what-have-you. Generally, different machines/systems/environments have different needs, so a really useful program in one environment is useless in another, or worse, actually detrimental. So it's enough for me to ensure that my programs are portable between systems sufficiently similar to have the same needs. I fail to understand the current trend of "portability through mediocrity".
"The next thing I want to say relates to CP/M-68K itself, and to the above charitable assumption about someone knowingly using bad algorithms. I have read that a great number of people are saying things like CP/M-68K is slow, the development tools are slow, etc. My first question is: Slow relative to what? And under what circumstances? I first observe that a lot of the newer machines are being designed with 5 1/4 inch drives; from my experience with mini-floppies, I'd say there's a major source of speed reduction, right there. My personal observations on my own 68010-based system (8 inch drives, and a meg of memory) is that CP/M-68K is, while not instantaneous, quite a bit faster than many other systems I've seen. The thing that blows CP/M-68K out of the water as a system is the abovementioned stupidity about algorithms. Here we have a processor with a gigantic linear address space, an incredibly efficient instruction set, and a very powerful systems language for generating good code. AND NONE OF THE PROGRAMMING TOOLS TAKE ADVANTAGE OF ANY OF IT! We are supplied with a line editor that won't hold 22K in a MEGABYTE of memory, assemblers and linkers that are totally disk to disk (as nearly as I can tell, even the symbol table is maintained on disk, f'r chrissakes!), therefore I/O bound, and SLOW, a file transfer program that only buffers 16K at a time, and a compiler whose libraries are optimized for "compatibility" (Lord, I'm beginning to hate that word!) with a DIFFERENT operating system!
"It is fairly clear that CP/M-68K was an afterthought on the part of DRI, whose efforts are concentrated on the 8086 and Itty-Bitty-Monstrosities compatibility (there's that word again!), but there is no excuse for this kind of nonsense (When I use my virtual disk as the work drive, though, things speed up considerably, the limiting factor being the size of the programs being loaded and the programs being compiled). This is what I meant by using stupid algorithms; the system has calls for determining available memory, so why not use them and keep as much as possible in core? Why not autoload the passes of the C compiler, using the built-
in program load function, instead of having to use the primitive batch facility? Why not...? You get the picture. The funny thing is that ALL of these features are available on CP/M-80!"
Daniel R. Lunsford
4735 Marconi Ave #22
Carmichael CA 95608
(We are printing Dan's address with his permission, so that you may write to him directly if you wish.) By the way, when we asked Dan how one could achieve MUCH better performance than 1.25-1, he asserted that modern compiler techniques permitted 1-1! That agrees with the book cited earlier by Bill H. Hmmm...
We believe that the question of the efficiency of a high level language, especially one which is supposedly suitable for use in writing operating systems and/or other high level languages, is an important one. We also believe that if any high level language can produce, as a finished product, something which is only between 0% and 25% slower than the same product as developed by a skilled, experienced assembly-language programmer THEN IT IS A GOOD IDEA TO USE THAT HLL! (Barring the occasional real-time application which cannot tolerate even a 25% performance degradation.)
Until now, we had not considered that the question of the HLL overhead of a language might be different than that observed in finished products such as WordStar 2000 or (ugh!) DRI's Personal BASIC. Daniel L. has just produced a strong argument that the horrendously slow performance of observed finished products written in C is not due to the C language itself. This immediately raises the question, "If finished products produced using C invariably have abysmal performance, as in benchmarking with an almanac, why should the end-user give a fig whether or not the language itself is at fault? Is not the abysmal performance itself to be avoided?" We believe that Pournelle, in his BYTE USER'S COLUMN, once observed that he could not evaluate programming languages, just IMPLEMENTATIONS of programming languages!
If Daniel L.'s argument has merit, is it not appropriate for the writers of finished products to get rid of those high-overhead I/O routines? And for BUYERS of finished products to avoid finished products written in C until that is accomplished?
Our first reaction after reading Dan's letter was that the producers of those finished products would not be so stupid as to kill the performance of their products with highly inefficient I/O. On reflection, we have changed our mind. The producers, essentially 100% of
them, COULD well be so stupid. Remember, the folks working with C are indeed disciples of the great god TRANSPORTABILITY. And, after all, those finished packages were probably developed in a UNIX environment! You will remember that UNIX machines are great for developing software - and anybody who has followed UNIX knows that all that software being developed must be sold elsewhere because it sure as hell is not being sold to the UNIX marketplace! And remember, a main feature of UNIX is that all of the utilities - such as I/O routines - are already written, so that the software developer does not have to constantly reinvent the wheel. Given that philosophy, the problems cited by Dan automatically follow.
On the other hand, we believe that Dan has only part of the picture. For example, the horrid "performance" of the AT&T 3B2/300's floating point package has been thoroughly documented and confirmed. It does not seem possible to blame that on I/O routines since to the best of our knowledge floating point does not use I/O!
Another point: when someone asserts that a compiler "produces code just as good as assembler", just exactly what does that mean? Does it mean that the compiler produces code which is just as efficient as that produced by a graduate student who has just been handed an Intel 8086 programming manual and who has never written assembly code before in his life? Or is the claim being made that the compiler produces code as good as the guy who did most of the assembly coding of Lotus' 1-2-3 (NOT Mitch Kapor, as it turns out)?
If the compiler generates code modules which set up parameters on a stack, call a subroutine, and leave the results on the same stack, is the efficiency of the compiler judged against assembly code WHICH IS CONSTRAINED TO USE THE SAME TECHNIQUES or against assembly code written using whatever technique works best, in the opinion of the assembler?
If we can take an inexperienced graduate student and make him use the same ground rules as those used by the compiler, then we can most certainly produce code whose high-level overhead is vanishingly small.
We believe that this question IS important. We know of a company much larger than Digital Acoustics, but which is small enough that upper management cares about such issues, that is performing an in-depth study of the question of HLL efficiency right now.
The MIS departments of large companies are frothing at the mouth to regain control of all those personal computers out there. Every personal computer which is not controlled by the MIS department diminishes the
importance of the department and its honcho - horrors! We were therefore interested to read, in DATAMATION, about how individual users select PCs vs. how the MIS departments do the same selection. Individuals judge 'PERFORMANCE' to be the third most important factor, while 'PERFORMANCE' does not even appear on the MIS list of factors to be judged! I YI YI!
The underlying theme of this rag, of course, is just how much performance an individual can get out of hardware which that individual pays for out of his or her own pocket and which the individual expects to use himself or herself. And it certainly does not make sense - to us - to select the best available and affordable CPU (the 680X0 series) and then cripple it with unnecessary HLL overhead.
Perhaps it makes sense to cripple the performance of a company computer for the sake of convenience, and it is undeniable that HLLs are convenient. But (please excuse the redundancy) this rag is NOT about company computers! (So will certain readers stop telling us how good virtual memory is when run on big, expensive company computers? Please?)
Until recently, we had received about three letters and one personal visit (by the owner of a DIMENSION $5000 68000 machine) discussing CP/M-68K, and all of those early contacts reported that CP/M-68K was slow and took up nearly 100K memory space. We have subsequently received about three letters, including Daniel L.'s, which assert that CP/M-68K is NOT slow and that it consumes about 22K memory space. Obviously, there is a difference of opinion here and we were premature in forming our opinion. We apologize to you readers for our carelessness and to DRI for, perhaps, incorrectly maligning their product.
Let us make it clear that we are not switching sides, but rather that we now take a neutral stance. We would like to hear more from you readers on this subject, and suggest the following guidelines:
"FAST" and "SLOW" are relative concepts. If you are going to discuss the speed of CP/M-68K, please do so in comparison to a contemporary, competing operating system (preferably MS-DOS) using comparable hardware. (One reader wrote that his CP/M-68K was reasonably fast using a one-megabyte RAM disk!)
'C' fans everywhere will be overjoyed to learn that DRI now makes their glacially slow Personal BASIC available for the 68000. You can buy a $10,000-and-up Motorola VME-10 to crawl that language on.
Back in late '72 or early '73 your FNE made the second biggest mistake of his life: he bought a minicomputer. At home, as a private citizen. It was a DCC 116, which was to the Data General Nova THEN as a typical PC clone is to IBM's PC NOW. DCC (Digital Computer Controls) also made a prototype model 216 which was a PDP/11 clone but it turned out that the PDP/11 I/O instructions are UNIBUS dependent and DEC had patents on the UNIBUS. So the prototype model 216 used a different bus structure and hence software written for the PDP/11 would not run on the model 216 without SUBSTANTIAL modification. The model 216 never went into production but the 116 did. Like we just said, we bought one. Sigh.
Cloning computers was somewhat less respectable back then than now. Nowadays, companies like AT&T, NCR, ITT and others make copies of the IBM PC and brag about how close to exact their copy is. So much for pride, and so much for innovation.
Luckily, we unloaded that minicomputer at a loss of only about $2000 but we personally got stuck with the Cipher dual cassette drive for which we paid about $4400. The next day floppy disks were introduced...
The DCC 116 was the FOURTH minicomputer we looked at. The first two were the PDP/11-45 and the Data General Nova. At that time, the biggest, baddest, meanest fire-breathing 60-foot-tall gorilla in the minicomputer world was the PDP/11-45. DEFINITELY king-of-the-hill! It had a CPU which was roughly equivalent to an 8MHz 68000, and had an optional floating point math accelerator for about $6600 which had performance about the same as today's Nat Semi 32081. Back then that type of number-crunching capability was DYNAMITE!
So the local DEC salesman, after discovering that we were absolutely serious about buying a PDP/11-45 for use at home, quoted us a system with a very fast hard disk, the floating point accelerator option, a CRT terminal, an ASR-33 Teletype printer and, oh yes: 4K of RAM. Price: about $47,000. Yes, we could afford it - it was our money that started Digital Acoustics, remember?
We checked with Data General, and they quoted a system with a vanilla NOVA, a slower hard disk, CRT terminal, ASR-33 teletype, no math accelerator and, oh yes: 16K of RAM. The Data General salesman proceeded to assert that the DG system was much faster than the PDP/11-45 in the application we were interested in: running BASIC programs for (electronic) engineering optimization.
Faster than a fire-breathing PDP/11-45? HAH!
Then the DG salesman gave us some references of folks who had benchmarked both systems. (One, we remember, was a professor in Tennessee.) All of these folks reported that the DG ran faster than the PDP/11-45 when running BASIC programs. We were eventually forced to admit that the DG salesman was right, but we did NOT understand why. Seeing that we were wavering, the DEC salesman offered us that $47,000 list-price system for $28,000 (!). But we wanted FAST, not cheap, so we reluctantly declined the offer.
Next, a guy named Norm Winningstad, whom we had met while we were employed as a technician at Tektronix, dropped by our hilltop house to see our somewhat largish speaker system. He had a newish company so his secretary and sales manager came along and they walked away with a check for $1,595. That represented a 10% down payment on a 32-bit ECL minicomputer which they proposed to manufacture. We understand that was the only order they ever got for that proposed machine, but they used it to show some venture capital folk back in Minnesota (home of CDC and Seymour Cray) that there was a market for a really fast minicomputer. We are told that Norm and his newish company, called Floating Point Systems, very nearly got that venture capital but close doesn't count. Norm and FPS were forced to stick with floating point accelerators, and FPS's sales are now only about $70 to $120 million a year (we forget the exact amount). So the FPS 32-bit ECL-based mini was the THIRD minicomputer we looked at.
(About half a year after Norm's visit we made the BIGGEST mistake of our life: forming Digital Acoustics to pursue the environmental noise market. Unfortunately, the environmental noise market proved to be just politicians exercising their vocal cords; there proved to be no money backing the rhetoric. Digital Acoustics proceeded to lose over $100,000 in its first 18 months in existence. We wound up dumping a tad over $250,000 into Digital Acoustics to keep it alive. That was $250,000 of our PERSONAL money, money we had earned and paid federal and state income taxes on. And those were 1972 dollars, not 1985 dollars. Sigh. Well, at least we never bought any Viatron Computer stock. You DO remember Viatron Computer?)
Now, of course, we know that the fire-breathing PDP/11-45 was slow primarily because, with only 4K RAM, the disk would CONSTANTLY be swapping stuff back and forth. With only 4K, not even the BASIC interpreter could be resident, much less the user's BASIC program! And DEC's BASIC interpreter did not know that that $6,600 math accelerator stunt box existed.
UNIX dates back to this time, and guess what? UNIX spends a LOT of time swapping stuff back and forth to disk, which is why it needs a big, FAST hard disk.
So why didn't we buy more RAM? Well, that was 1972 and RAM that would work with the PDP/11-45 was VERY expensive. Partly because it cost a lot to make and partly because minicomputer manufacturers then wanted HUGE markups on their RAM. They still do. Wang Labs, today, wants $1200 for a 32K upgrade for their 2200 LVP, we kid you not.
Knock-off NOVA clones like the DCC-116 weren't the only things which raised the ire of the Data General folk. DG was selling a 16K RAM card (based on core, not integrated circuits) for about $6,600. At the same time, a company called Keronix was selling a form, fit, function equivalent 16K card for exactly half the price - $3,300. As we recall, Data General employees, singly and severally, made no secret that they were hot under the collar over this, in their opinion, "unfair competition"!
In 1972, a very close female relation of ours lived on Harvard St. in Santa Monica, one and a half blocks north of Santa Monica Blvd. When we visited, we would drive up the Santa Ana Freeway, turn off onto the Santa Monica Freeway, and drive west until we reached Santa Monica. If we had driven to the Lincoln Ave turnoff, turned right, and then quickly turned right again and parked our car by the curb, we would have found ourselves parked right in front of the location which, four years later, became the Colorado St. location of Dick Heiser's "The (original) Computer Store", the first retail personal computer store ever. But since we didn't want to wait four years, we would take the Cloverfield/26th St. offramp.
When we would pull up to the signal on 26th St at the end of the offramp, we could look across the street and a short half block north and there, behind the filling station on the corner, was the Keronix plant. Since we read the Keronix ads in ComputerWorld back then, and since we briefly owned a minicomputer for which the Keronix boards were the cheapest available memory expansion, we had cause to notice the building. Besides, it had this big "KERONIX" sign painted on the side. You couldn't miss it.
One Sunday night a fire broke out in the production area of the Keronix facility. When the firemen arrived, they found a man in the production area who had been overcome with smoke. They also found an empty gasoline can close by. The arrival of the firemen evidently saved the life of the guy who had been overcome with smoke. When he was revived at the hospital, there were Santa Monica policemen standing by his bed to ask him some questions. It seems the policemen, perhaps influenced by the man's presence inside the Keronix plant on the Sunday night the fire broke out and also perhaps influenced by the man's proximity to the empty gasoline can at the time the firemen arrived, suspected that he might possibly have SET that fire! Tsk!
So the police asked the man his name, address and occupation. He refused to answer!
After several days the man's identity was finally traced. It turned out that he was a private detective from Philadelphia, PA. It also turned out that a very large proportion of his work was for one largish legal firm in Philadelphia. By an enormous coincidence, that particular largish legal firm had a Massachusetts-based minicomputer firm as one of its clients. You will never guess which minicomputer firm. Awww, you guessed right! It was Data General.
Now, this is a terrible thing to say, but some folks at the time actually suspected that Data General might have something to do with that Keronix fire, which did not appear to have natural origins. However, the evidence was purely circumstantial and while that theory was never DISproved, it was never PROVED either. We are now going to quote one paragraph from the paperback edition of Tracy Kidder's 1982 Pulitzer Prize-winning book, THE SOUL OF A NEW MACHINE (Avon 0-380-59931-7, $3.95), page 23:
"Some years back, in the early 70's, a company called Keronix accused Data General's officers of arranging the burning of a factory. Keronix had been making computers that performed almost identically to Data General machines. The theory was that Data General had taken a shortcut in attempting to get rid of this competitor. In time, the courts found no basis for
However, the Santa Monica Library maintains complete microfilm copies of the local newspaper, the Santa Monica Outlook, all the way back to the 1880's and hence would have the local stories on that fire and its aftermath. ComputerWorld, which at that time was published biweekly, ran a front-page feature story on the Keronix fire containing essentially the information we just reported but in somewhat more detail.
those charges and dismissed them. Indeed, it seemed preposterous to think that the suddenly wealthy executives of Data General would risk everything, including jail, and resort to arson, just to drive away what was, after all, a small competitor. But Wall Street didn't see it that way, apparently. When Keronix made its accusation, Data General's stock plummeted; there was such a rush to unload it that the New York Stock exchange had to suspend trading in it for a while. More peculiar was the fact that many years later, some veteran employees, fairly far down the hierarchy, would say privately that they believed someone connected with the company had something to do with that fire. Not the officers, but some renegade within the organization. They had no basis to say so, no piece of long-hidden evidence. It seemed to me that this was something that they wanted to believe."
Kidder did not get all his facts right - Keronix made memory boards, not minicomputers - and he 'investigated' the situation many years after the fact. We do not believe that Kidder was aware of some of the details we listed above. Still, the evidence against Data General is purely circumstantial. We would like to emphasize, in case any Data General legal beagles are reading this, that we have NEVER accused Data General of causing, abetting, or being associated in any way with that Sunday night Keronix fire of unnatural origin.
However, for several years afterward we would, as a private citizen and in a kidding manner, ask a Data General minicomputer salesman named Jim (Danielson?) "Has Data General fire-bombed any more of its competitors?" when we would chance to meet.
In case you were wondering, the above is completely factual and is definitely not an 'April Fool' story.
In the last issue, at the bottom of p6, col 1, we wrote briefly about bit-mapped displays. Judging by several letters we received on the subject, we need to discuss this in more detail. If you are not interested, you might want to skip ahead several pages. If you're still reading, let us explain that we do in fact have some expertise on this subject: we and James S. have developed two high-res graphics systems, both of which work, are in production, and are being used in $24K to $38K CADD systems.
We are going to begin by asserting that we are going to refresh the CRT display every 1/60 of a second. That is almost a de facto standard, although IBM saves money on its PC text display by refreshing every (ugh!) 1/50 second. So we are going to plot every pixel once every 1/60 second, or 16,667 microseconds.
The next thing you need to know is that CRTs blank about 25% of every horizontal line during horizontal retrace, and that about 5% of that 1/60 second is taken up by the vertical retrace (these are typical numbers for high-quality CRTs). Therefore, only 70% of that 16,667 microseconds can be used to update the display. So about 11,667 microseconds are available to actually plot pixels.
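For readers who want to check that timing budget, here is the same arithmetic expressed in Python, purely as a sanity check, using the blanking percentages just given:

```python
# CRT refresh timing budget, using the article's assumed blanking figures.
frame_us = 1_000_000 / 60      # one 60 Hz refresh period: ~16,667 microseconds
h_active = 1 - 0.25            # 75% of each horizontal line is visible
v_active = 1 - 0.05            # 95% of each frame lies outside vertical retrace
usable = h_active * v_active   # ~0.7125, which the text rounds to 70%
plot_us = frame_us * 0.70      # time actually available to plot pixels

print(round(frame_us))   # 16667
print(round(plot_us))    # 11667
```

The 0.75 x 0.95 product is 0.7125, so the text's round 70% figure is slightly conservative.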
The one bit-mapped display that most of us are somewhat familiar with is the one on the Mack. That display is 512 X 342 pixels, and the display refresh memory is 16 bits wide. So we have to display 175,104 pixels in 11,667 microseconds, and we fetch those pixels from memory as 10,944 words - again, in those 11,667 microseconds. This gives us a pixel rate of 15.0MHz or a pixel period of 66.7 nsec. And we have to fetch a 16-bit word from memory every 1.067 microseconds, appx.
Suppose that we divide that 1.067 microsec into 8 clock periods, resulting in a clock of about 7.5MHz. That is (very conveniently) equal to the pixel clock divided by 2, and is about the right clock for an 8MHz 68000. Two of those clock periods are equal to 267 nanosec, just about time to do one memory fetch from 150nsec RAM. So we can steal 2 cycles of every 8 for the display during the 70% of the time that pixels are being plotted and steal none while pixels are not being plotted.
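The Mack-like numbers in the last two paragraphs can be reproduced with a few lines of Python (a verification sketch only; the ~267 ns figure is the text's, and 150 ns DRAM is the text's assumed memory speed):

```python
# Display arithmetic for the Mack-like 512 x 342 bit-mapped screen.
plot_us = (1_000_000 / 60) * 0.70   # ~11,667 us of pixel-plot time per frame
pixels = 512 * 342                  # pixels per frame
words = pixels // 16                # 16-bit memory fetches per frame

pixel_mhz = pixels / plot_us        # pixel rate in MHz (pixels per microsecond)
fetch_us = plot_us / words          # one word fetch every ~1.067 us
clk_ns = fetch_us * 1000 / 8        # split the fetch period into 8 CPU clocks
cpu_mhz = 1000 / clk_ns             # ~7.5 MHz, about right for an 8MHz-rated 68000
# two clock periods come to ~267 ns, just enough for one 150 ns DRAM read cycle

print(pixels)                # 175104
print(words)                 # 10944
print(round(pixel_mhz, 1))   # 15.0
print(round(cpu_mhz, 1))     # 7.5
```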
By stealing 2 of every 8 clocks from the (assumed 68000) CPU during the 70% pixel plot time, we are slowing down the CPU. The ratio is:
SLOWNESS FACTOR = 1/((.7*6/8) + .3) = 1.21212
In other words, by sharing the CRT refresh RAM with the CPU RAM, we cause our machine to run 21.2% slower than if the refresh RAM were separate (as is the case with Digital Acoustics' graphics display boards). Remember, that's 21.2% slower than the actual 7.5MHz clock. And THAT is 29.3% slower than the rated 8MHz performance of the CPU used!
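That slowness factor, and its restatement against the full 8MHz rating, check out numerically (a Python sketch; "slower" here means "takes that much longer", per the text's usage):

```python
# Slowness factor for the Mack-like design: during the 70% of the frame when
# pixels are being plotted, the CPU keeps only 6 of every 8 clocks.
sf = 1 / (0.7 * (6 / 8) + 0.3)   # 1.2121 -> 21.2% slower than the 7.5 MHz clock
vs_8mhz = sf * (8 / 7.5)         # 1.2929 -> 29.3% slower than a full 8 MHz CPU

print(round(sf, 4))       # 1.2121
print(round(vs_8mhz, 3))  # 1.293
```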
You have probably noticed that the numbers above are not quite exactly those of the Mackintosh, although they are VERY close. The reason is that the Mack does not use a particularly high quality monitor (where the horizontal and vertical retrace are concerned) and so uses a bit less than 70% of the time to draw pixels. However, we CAN use the above numbers to extrapolate the performance of various hypothetical "Super-Mack" improved bit-mapped displays, especially since such displays would presumably incorporate a higher quality CRT than the one used in the Mack.
Let's begin with a monochrome display just like the one in the Mack but with improved resolution. We will use 640 X 400, which fits nicely into 32K RAM. Our refresh RAM is still shared with the 68000 CPU and so is still 16 bits wide. We need to plot 256,000 pixels (and fetch 16,000 16-bit words) in 11,667 microseconds. That is a pixel frequency of 21.94MHz or a pixel period of 45.57nsec, and a word must be fetched every 0.729 microseconds.
NOW WE GOT PROBLEMS! A mass-production, cost-sensitive personal computer is very unlikely to use a CPU rated faster than 8MHz or DRAM rated faster than 150nsec. The 267nsec we calculated for a memory read cycle for the 512 X 342 Mack display is nearly the minimum required! And the pixel clock and CPU clock should be derived from the same master clock if we are going to keep a Mack-like design.
Our best bet is to use a 43.88MHz master clock and divide it by 2 for the 21.94MHz pixel clock and divide it by six to get a 7.313MHz clock for the 68000 - very close to the clock rate we wound up with on our hypothetical Mackish machine. Two of these clocks are 273.5nsec - just long enough for one memory read cycle for the display. So far, it would seem, so good - but a very nasty glitch is about to bite us on the ankle!
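The "Super-Mack" clock scheme can likewise be verified with a short calculation (Python, as a check only; the 43.88MHz master clock is the figure chosen in the text):

```python
# "Super-Mack" 640 x 400 monochrome display arithmetic.
plot_us = (1_000_000 / 60) * 0.70   # ~11,667 us of plot time per frame
pixels = 640 * 400                  # 256,000 pixels per frame
words = pixels // 16                # 16,000 16-bit fetches per frame

pixel_mhz = pixels / plot_us        # ~21.94 MHz pixel rate
fetch_us = plot_us / words          # one word fetch every ~0.729 us
master_mhz = 43.88                  # master clock: twice the pixel clock
cpu_mhz = master_mhz / 6            # ~7.313 MHz for the 68000
clks_per_fetch = fetch_us * cpu_mhz # ~5.33 CPU clocks between display fetches

print(round(pixel_mhz, 2))      # 21.94
print(round(fetch_us, 3))       # 0.729
print(round(cpu_mhz, 3))        # 7.313
print(round(clks_per_fetch, 2)) # 5.33
```

The awkward 5 1/3 clocks per fetch is exactly the "nasty glitch" the text describes.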
Remember that we have to fetch a word every 0.729 microseconds to refresh the display, while our 7.313MHz CPU clock has a period of 0.13674 microseconds. That means we have to stop every 5 1/3 clock periods to grab two clocks for a memory word fetch. Either we have to toss out the 1/3 clock or we have to go to a repeating cycle of 5, 5, 6 CPU clock periods per display word fetch. The latter approach would require a redundant
set of two 8-bit latches to maintain an even pixel flow (trust us, that statement is true). So the more cost-effective approach is to use a counter which stretches every 5th clock (during the 70% of the time that pixels are being plotted) by an extra third. This is not an especially simple clock circuit but it could be produced inexpensively in large production quantities via a custom gate array.
So what we have is a computer which steals 2 of every 5 CPU clock cycles during the 70% of the time that pixels are being displayed, and which has a clock circuit which is only practical using a custom gate array manufactured in large quantities. The resulting performance degradation is:
SLOWNESS FACTOR = 1/((.7*3/5.33) + .3) = 1.4409
Which means that the machine will run 44.1% slower than 7.313MHz or 57.6% slower than the rated 8MHz performance of the CPU! We are beginning to slow our CPU down to a walk, folks, just by sharing the CPU memory for refresh.
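Again the factor can be checked directly (Python sketch, using the text's 3-of-5.33 clock accounting):

```python
# Slowness factor for "Super-Mack": during the 70% pixel-plot time the CPU
# keeps only 3 of every 5.33 clocks (the text's figures).
sf = 1 / (0.7 * 3 / 5.33 + 0.3)   # ~1.4409 -> 44.1% slower than 7.313 MHz
vs_8mhz = sf * (8 / 7.313)        # ~1.576 -> 57.6% slower than a full 8 MHz CPU

print(round(sf, 4))       # 1.4409
print(round(vs_8mhz, 3))  # 1.576
```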
BUT THE FACTS ARE EVEN WORSE THAN THAT! Here's the rest of the bad news, folks:
Our Mackish machine, which is remarkably similar to the real Mack, ran 29.3% slower than a real 8MHz machine. Our "Super-Mack", with a monochrome 640 X 400 display, runs 57.6% slower than a real 8MHz machine. That means our hypothetical "Super-Mack" design has only 82.0% of the computing power of the "Mackish" machine. But "Super-Mack" has to manipulate 256,000 pixels versus only 175,104 pixels for the "Mackish" machine. And that means that "Super-Mack" has about HALF (actually, 56.1%) the computing power per pixel that the "Mackish" machine has!
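The 82.0% and 56.1% figures follow from the two slowness factors and the two pixel counts (Python, as a check):

```python
# Relative computing power of the two hypothetical machines.
mackish = 8 / 1.2929    # effective MHz of the Mack-like design
super_m = 8 / 1.5763    # effective MHz of "Super-Mack"

power_ratio = super_m / mackish                        # ~0.82 of the power
per_pixel = (super_m / 256_000) / (mackish / 175_104)  # ~0.561 power per pixel

print(round(power_ratio, 2))  # 0.82
print(round(per_pixel, 3))    # 0.561
```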
Honest folk will acknowledge that the existing Mack bit-mapped display is a bit slow at times. In particular, a touch-typist can sometimes run several characters ahead of the display. Now, consider that our hypothetical "Super-Mack" design is a tad less than twice as slow as the Mackintosh when it comes to handling that higher-resolution bit-mapped display. We don't think a bit-mapped display that is twice as slow as Mack's is livable with!
However, the persons who are involved in these nascent amateur Mack-Hacker groups do not propose to stop at a merely higher-resolution bit-mapped display. Oh, no! They (most of them, that is) propose to have higher resolution AND color! At LEAST three bit planes for RGB!
Let us assume a 640 X 400 color display. A simple one with only 8 colors (3 bit planes). We will keep the 60Hz refresh rate. So far, that happens to be an EXACT description of our POTBOILER graphics board, the one which we used to offer for sale for $550 including a 7220 graphics controller chip. However, this is also typical of the performance specs that the amateur "Hacker-Mack" folk are bandying about, TO BE DONE IN MACKISH FASHION! That means sharing the refresh RAM with the CPU and stealing CPU cycles for the CRT update.
Using the figures we have already outlined, it should be obvious that you cannot share 3 bit planes of 256,000 bits each with a CPU using 150nsec DRAM and an 8MHz CPU. There is not enough time to fetch the needed display information even if we shut down the CPU completely during the 70% of the time needed for refresh! Remember, we need to fetch 16 pixels every 0.729 microsecond, but THREE words are needed to describe those 16 color pixels. That means a word fetch every 243 nanoseconds. Forget it; that exceeds the read cycle specification of 150nsec DRAM.
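Here is that bandwidth arithmetic spelled out (a Python sketch of the figures just quoted):

```python
# Why three bit planes cannot share 150nsec DRAM with an 8MHz CPU.
# 640 X 400 at 60Hz refresh, with pixels visible 70% of the time.
pixels_per_frame = 640 * 400
frame_time = 1 / 60                  # seconds per frame
active_time = 0.7 * frame_time       # time available to display pixels

time_per_16_pixels = active_time / (pixels_per_frame / 16)
print(round(time_per_16_pixels * 1e9))   # 729 nanoseconds per 16 pixels

# Three bit planes means THREE word fetches per 16 color pixels:
fetch_interval = time_per_16_pixels / 3
print(round(fetch_interval * 1e9))       # 243 nsec between word fetches
# ...and a "150nsec" DRAM needs well over 243nsec per complete read
# CYCLE, so the display can't be fed even with the CPU shut out entirely.
```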
That means that a principal design objective of those amateur groups is unattainable. It also means that the editors and reviewers on the Mack scene who are happily blathering nonsense about "Color Macks" haven't the slightest idea what they are talking or writing about. This should be no surprise; you have to be able to do simple arithmetic to figure out the design and performance parameters of a graphic display and we have long since concluded that a lot of the editors, reviewers and other opinion-makers out there in the personal computer marketplace cannot do simple arithmetic.
A bit-mapped display is one which can place an alphanumeric character on a bit-boundary (any one of, say, 342) rather than on character boundaries (one of, say, 25). Most folks would argue that a bit-mapped display is one which permits direct access to the refresh RAM by the CPU to facilitate "raster ops". "Raster ops" are what is going on when you drag down a Mackintosh menu. Bit-mapped displays generally "paint" the alphanumeric characters to the CRT as a series of
memory operations to store the entire dot-matrix rather than storing a single (ASCII) byte in a display RAM which is converted to dot-matrix form by hardware. The "painting" operation is less efficient than you might believe because the characters commonly (as in the Mack, for instance) are NOT aligned on byte boundaries.
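The byte-boundary problem is easy to sketch. In the minimal illustration below (the function name and the little bytearray frame buffer are our own invention, not Mack code), storing a single glyph row at an arbitrary bit position costs shifts, masks and TWO read-modify-write operations instead of one aligned store:

```python
# "Painting" one 8-bit-wide glyph row into a bit-mapped frame buffer
# at an arbitrary bit position. When the position is not a multiple
# of 8, the row straddles a byte boundary and touches TWO bytes.

def paint_glyph_row(fb, bit_x, row_byte):
    """OR one glyph row into frame buffer 'fb' starting at bit position bit_x."""
    byte_i, shift = divmod(bit_x, 8)
    fb[byte_i] |= row_byte >> shift                     # left-hand piece
    if shift:                                           # row straddles a boundary
        fb[byte_i + 1] |= (row_byte << (8 - shift)) & 0xFF   # right-hand piece

fb = bytearray(4)
paint_glyph_row(fb, 3, 0b11111111)      # start 3 bits into byte 0
print(f"{fb[0]:08b} {fb[1]:08b}")       # 00011111 11100000
```

A character-mapped display, by contrast, stores one ASCII byte and lets the hardware do the rest.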
"Raster-ops" are generally much slower when performed with a graphics display which uses a 7220 (for instance) graphics controller chip to handle the refresh DRAM. Although our "POTBOILER" color graphics board is one helluva efficient device for handling 2-D CADD color graphics, it is INefficient when used as a bit-mapped display. (If you do not think it is a helluva efficient board, go see a demonstration of a Cascade Graphics model 7 CADD system. We dare you! The model 7 has been benchmarked at 45 to 100 times the redraw speed of the $2,000 AUTOCAD software package running on an IBM PC!)
We have tried to avoid making absolute statements about what a bit-mapped display is, or is not. As far as we know, a common absolute definition has not yet been agreed upon by the small computer industry.
It appears obvious that the future of bit-mapped displays and their associated "raster-ops" is tied to the mass-market acceptance of windows as a user interface technique. And mass-market acceptance (or rejection) of windows is going to be based on mass-psychology whose factors are not yet known. Logical analysis by us or by YOU is going to be totally irrelevant. What we need is careful OBSERVATION: do you see lots of happy, smiling people forking over money for their bit-napped personal computers and forking over more money for lots of window-based software to run on their bit-napped personal computers?
As the politicians would say, all the returns are not in yet. First LISA and now Mackintosh were your initial window-machines. LISA is rotting in her grave and Mackintosh sales are, as Apple honchos have admitted on the record, disappointing. But IBM sold off a lot of PCjrs in December with a massive price cut, sales incentives to the individual salespersons AND - get this - a COLOR version of MackDraw! And the forthcoming IBM PC II is widely believed to feature a bit-mapped display. On the other hand, as one wag recently put it, IBM machines are always better BEFORE they are delivered than AFTER!
This leads us to window programs which are optional - that is, the computer user has the option of using them or not (the Mack environment offers no such option). We can discuss such programs in terms of REVIEWS or in terms of SALES.
Reviews of the various window programs are invariably favorable, with the caveat that they tend to eat lots of RAM and that they are not, uh, especially swift... Sales of (optional) window programs to date have been miserable. It has been estimated that fewer than 20,000 such programs were sold last year. Naturally, as DATAMATION reports, some industry guru had predicted sales of 350,000 windowing software packages in 1984! (Shades of UNIX sales predictions?)
Why not just look up the DATAMATION article (Mar, p.48) entitled "BROKEN WINDOWS" and subtitled "TopView, GEM, VisiOn, Desq, and Windows from Microsoft were all introduced with great fanfare more than a year ago. But few are buying." Or, you can read the front-page feature articles on IBM's TopView in two consecutive recent PC WEEKs. It seems that TopView has now been released and it has fallen flat on its face!
So the news from the marketplace as of today (this is being typed on Mar 10) is by and large discouraging for window fans (please excuse small pun). Before we discuss possible reasons why this is so, let us first ask why anybody or any company would invest the necessary enormous sum of money required to develop a window package? (Let us bow our head for one minute in remembrance of VisiCorp, a large and profitable concern which was utterly destroyed by the cost and ancillary strains of creating VisiOn.)
Well, a windowing package provides a uniform user interface PROVIDED all the software vendors port their application package to that windowing package (as LOTUS, for instance, most certainly will not). But mainly, windows are SEXY! Who can deny the whiz-bang thrill of watching lots and lots of windows magically appear and move and/or disappear? Who stops to think whether all those windows are USEFUL? Gosh! Gee! Lookitthat!
[We hope that our several female-type readers will forgive us for the following sexist comparison.] Windows are a lot like a well-stacked young girl wearing shorts and a tube-top with no bra. She is undeniably sexy, but can she cook or keep house? Is she good with babies or growing children? Hell, we don't even know whether she is any good in bed! A guy would do well to give some thought to these, and other, issues before buying (marrying the gal).
Personal computer users would do well to consider similar issues with regard to windowing programs before forking over about $500 for an optional program. What are the costs? What are the benefits? IN FACT, WHAT DO WE USE WINDOWS FOR IN THE FIRST PLACE?
Many personal computer industry observers believe that IBM's TopView will be the bellwether of the window-software industry. If TopView, with IBM's money and marketing and prestige behind it, flops then windowing is dead except for certain cutesy Yuppie-type computers like Mack. Well, TopView has been released and early indications are that it has flopped BADLY. Why?
IBM PCs, unlike Macks, are generally not purchased as adult toys. They are commonly purchased to do one job and one job only - word processing (WordStar or MultiMate), spreadsheets (Lotus 1-2-3 or Symphony), communications (Crosstalk). The first problem with TopView is that it takes up nearly 200Kbytes of RAM, we kid you not. Since IBM's BIOS limits the PC to 640K RAM, that can be a problem even for the few folks who have all 640K filled.
The second problem is Lotus 1-2-3 and the very snappy screen response provided by 1-2-3, which bypasses the ROM BIOS and directly addresses the display hardware. 1-2-3 customers have become accustomed to that snappy response, and it is almost totally responsible for the enormous sales advantage enjoyed by Lotus over such programs as Context's rather leisurely MBA. Lotus is understandably reluctant to degrade the performance of 1-2-3, reducing its (substantial) advantage over its competitors. (Actually, 'reluctant' is not the word: try 'refusal', as in "Hell, no"!)
The third problem is that an important "advantage" of window programs is that they permit rapid switching between two or more applications. We say again, most IBM PC-type personal computers are purchased for ONE PURPOSE AND ONE PURPOSE ONLY! Why pay $500 for a program which eats 200K RAM and slows down your ONE application? Why, indeed?
But this does not apply to YOU, dear reader! Heck, no! You use your computer for LOTS of different applications, and you want to switch between them rapidly. Besides, you like tube tops, er, windows. So take your IBM PC and load up TopView (kiss 200K goodbye). Now load your first application, Lotus' 1-2-3 (kiss another 300K goodbye if you are using even a moderate number of rows and columns on your spreadsheet). Now load MultiMate. Ooo! We seem to be running a tad short of memory! Let's see, now... load Crosstalk, the communications program. Oops! We seem to be out of memory here!
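If you want to watch the memory disappear arithmetically, here is the budget (a Python sketch; the TopView and 1-2-3 figures are from the text, while the DOS and MultiMate figures are our own rough assumptions for illustration):

```python
# Where your 640K goes under TopView, in round numbers.
total = 640          # Kbytes: the BIOS-imposed ceiling on the IBM PC
budget = {
    "DOS etc.":   60,   # rough assumption for DOS plus drivers
    "TopView":   200,   # per the text: nearly 200K, we kid you not
    "1-2-3":     300,   # with a moderately loaded spreadsheet
    "MultiMate": 150,   # rough assumption for the word processor
}
used = 0
for name, kbytes in budget.items():
    used += kbytes
    print(f"{name:10s} {kbytes:4d}K  ({total - used:4d}K left)")
# By the time MultiMate loads we are 70K in the hole -- and we
# never even got to Crosstalk.
```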
Naturally, the above scenario is a fantasy because Lotus refuses to produce a crippled version of 1-2-3 which will crawl, slowly, under TopView. Are you beginning to see why TopView has not caught on in the mass marketplace?
The captive window programs, such as the Mackintosh operating system, have a better shot at success (you don't got no choice, chum!). But have you seen the latest sales figures for Mack? DRI's GEM has a rosy future if it is the only operating environment which is going to be supported on the Jackintosh, and provided that Jack can find the jack to produce the Jackintosh in mass-market quantities.
But it does appear at this time that the optional window programs are doomed. It will be interesting to see if the bit-mapped display of the forthcoming PC II can turn this situation around.
When we say that a 'Mackish' color display is possible, we mean of course that the CRT refresh RAM cannot be shared with the CPU RAM - we proved that the CPU RAM simply does not have the required bandwidth - but that the EFFECT of shared memory could be achieved. That means raster ops. An example would be our Tandy 2000.
Our Tandy 2000 is also a good example of how drastically bit-mapped graphics can clobber performance. In fact, we REMOVED the color graphics adapter from our 2000 because it made the machine LUDICROUSLY slow. This despite the fact that the Tandy 2000 has by far the best color graphics of any machine under $5000. Nobody else is even close. (The Mindset's graphics are a joke next to the Tandy 2000's, unless you want sprites for rock-shooting.)
One way to mix graphics and text is simply to use an exclusive-or gate to mix conventional 25-line, 80-column text with graphics. This requires the pixel rate of the text and graphics to be precisely synchronized, and does NOT permit text to be aligned on bit rather than character boundaries. It also leaves the text with a fixed size. We don't think most folks would consider such a system to be a bit-mapped system.
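A sketch of what that XOR mixing buys you, and what it doesn't (the bit patterns below are made-up illustrations, not any real machine's video):

```python
# Mixing synchronized text and graphics pixel streams through an
# exclusive-or gate, one slice of a scan line at a time. Where text
# overlaps lit graphics the XOR inverts it, so it stays visible --
# but nothing here can slide a character over by one pixel, or
# change its size. The cell positions are baked into the hardware.
text_line     = 0b00111100_00111100  # two fixed-position character cells
graphics_line = 0b00000000_11111111  # graphics lit on the right half
mixed = text_line ^ graphics_line
print(f"{mixed:016b}")   # 0011110011000011
```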
We believe that the most cost-effective solution to the text-cum-graphics problem at the moment is to use a dedicated text video circuit with nice, conventional 25 lines and 80 columns as the system output. This circuit should be memory-mapped like the Apple and Pet/CBM text screens, NOT at the other end of an RS-232 cable. Graphics then become optional, and require a second CRT.
This is not so bad. It allows persons to select the graphics dictated by their taste and their pocketbooks. We can have NO graphics, cheap low-res monochrome, crummy 320 X 200 color graphics like the IBM PC, good 640 X 400 color graphics like the Tandy 2000, only with a 7220 to cure the speed problem (that's our POTBOILER), or high resolution color graphics (like our VDHR-3 if somebody made a case for the big, expensive CRT and the VDHR circuit boards). It is nice to have a choice.
What Mack does not give you is a choice. And the various 68000 hacker groups, as far as we can tell, do not plan to give you a choice either. The graphics will be just one configuration which will presumably be the configuration chosen by the person who can yell the loudest or something...
(You don't think those amateur hacker groups are run as a democracy under Robert's Rules of Order, do you?)
T.I.'s NUMachine has just bitten the dust. This one is based on the 68000 and a bus, originally called the NuBus, which was developed and patented by MIT. You could buy this thing in a rack-mounted version (for instance), and the price would be a mere $53,470 - each - provided you bought 25 at a time. T.I. also made a cheap version of the machine which sold for $36,240 if you bought 25 at a time. T.I. built a grand total of 300 of these machines in two years.
Bozo the clown (aka Encore Computer Corp) has his problems too. You will remember that Bozo, or Encore, is the outfit with lots of EXTREMELY high-powered and expensive executives - one is Gordon Bell, who is credited with inventing the minicomputer and was DEC's VP, Engineering prior to leaving for Encore. Unfortunately, Encore's business plan seems to have been prepared by Henny Youngman (take my wife... please!) and Curly and Moe of the Three Stooges.
Encore is now working, we believe, on plan C. This time they are going to sell microcomputers based on Nat Semi's 32032. The microcomputers will be rated at 1.5 MIPS to over 10 MIPS. 1.5 is what you get with one 32032 CPU (provided Nat Semi can ever make the 32032 run at 10MHz) and 'over 10' is what you get when you combine, in an effective manner, lots of 32032 CPUs. Most of you know how easy it is to combine multiple microprocessors in an effective manner to achieve, say, a ten-fold improvement in performance, right?
Oh, yes, the price tag: Encore is going to be selling the single-CPU version for (are you sitting down?) $120,000 dollars! The multiple-CPU models will go for up to $1 million, so be sure to place your order early.
(continued from the front page)
Or, ONWARD and UPWARD! The folks at STRIDE have moved to the front of the avant-garde folks dedicated to improving the man-machine information bandwidth.
STRIDE now sells a gizmo which requires the user to wear a small reflector on his or her head (they report that a 5/8 inch reflector mounted on the end of a soda straw, with the straw worn over one ear, works great!) The user then nods his/her head to select a menu item on the screen. The gizmo has photo-receptors which 'catch' a reflected infrared (invisible to human sight) light beam and thereby sense the position of the head. This new commercial product is called the NOD!
All right, Buddy F.: just exactly HOW do we convince our readers that this item isn't an April Fool spoof?
It has become de rigueur for UNIX publications to announce new UNIX-related vaporware with the headline "UNIX GATHERS MOMENTUM AS..." This has often puzzled us, since rational persons with a nose for the facts can easily see through that P.R.-related BS. When we called this state of affairs to the attention of an editor at UNIX/WORLD, that editor replied that UNIX did, too, gather momentum as a result of that flackery! Sigh...
MOMENTUM IS A MEASURABLE PHYSICAL QUANTITY and is the product of mass and velocity. In the cgs (centimeter-gram-second) system, momentum has units of gram*centimeters/second. (If you thought a gram was a unit of weight, that is technically not correct but at the surface of the earth it is a close approximation. A gram-mass in orbit seems to have no 'weight' but it still has mass!)
A fellow computer journal editor reports that "white water" - rapids - is rated on a scale from one to six. A "one" would not wake an infant asleep on a rubber raft. Niagara Falls is a "six"! We will adapt this scale to the measurement of momentum, wherein the momentum of a mayfly struggling through the air at one Km/hr is a one, and a galaxy-sized black hole careening through the universe at .99999 c is a six! Since this scale is intended to measure the momentum of UNIX public-relation type flackery, we will name this unit of measurement the "Gill", after that UNIX/WORLD editor.
We have examined several UNIX announcements lately and we have yet to find momentum greater than zero, even on the Gill scale. No substance (mass) there at all, and zero times whatever is zero...
The magazine MINI-MICRO SYSTEMS has in the past been among the worst offenders when it comes to printing obviously exaggerated claims with respect to UNIX (remember $2 billion in XENIX sales in '83?). Times have changed; here are a few quotes from the Feb issue:
"The drive to make UNIX the minicomputer industry's single operating system appears stalled... Even its strong advocates seem resigned to UNIX's becoming a guest in many houses rather than the overall owner." (p.149)
"Many market analysts seem reticent about UNIX's chances of becoming the prevailing, native operating system... 'UNIX is definitely not becoming the industry standard,' says Damian Rinaldi... 'In fact, there really hasn't been a commercial demand for it. There doesn't seem to be a lot of people knocking down any vendor's door for UNIX-based systems.'" (p.150)

Here is the sales and "profit" record of Fortune Systems for the seven quarters following its IPO (Initial Public Offering). Not shown is the $3.75 million loan it made to North Star. Fortune raised $110 million when it went public in the spring of '83. Late last summer Campbell, Fortune prexy, was bragging about the over $40 million cash that Fortune had in the bank. Uh - just how much cash do you think Fortune has NOW?

CALENDAR     SALES        NET
2ND '83     $12.0M     ($3.0M)
3RD '83      $9.1M     ($9.1M)
4TH '83     $12.6M     ($6.6M)
1ST '84     $15.1M     ($3.4M)
2ND '84     $20.3M      $0.04M
3RD '84     $16.8M     ($3.7M)
4TH '84     $18.0M    ($15.0M)
           -------    --------
           $103.9M    ($40.7M)

You will remember that Fortune was going to make a fortune for its investors selling inexpensive 68000-based UNIX boxes. There appears to have been a defect in that plan someplace. Could the defect be UNIX? The Alpha Micro machines are VERY similar to the Fortune boxes but Alpha Micro uses a different operating system - an efficient one written in assembly - and Alpha Micro is quite profitable, thank you!

Examining the profits of inexpensive 68000-based box-maker Wicat BEFORE it adopted UNIX and Wicat's losses AFTER it adopted UNIX is left as an exercise for the student (read: Rod, Frank, Bill).
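For the suspicious, the Fortune table's column totals can be re-added (a Python sketch; figures in $M, losses negative):

```python
# Totting up Fortune Systems' seven quarters following the IPO.
sales = [12.0, 9.1, 12.6, 15.1, 20.3, 16.8, 18.0]
net   = [-3.0, -9.1, -6.6, -3.4, 0.04, -3.7, -15.0]
print(round(sum(sales), 1))   # 103.9
print(round(sum(net), 1))     # -40.8 (the table's $(40.7M) presumably
                              # reflects rounding in the quarterly figures)
```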
"Brian Boyle, managing analyst of software for market research company Gnostic Concepts: '... I think it is probably going to be the industry standard, but by default... for the same reason that certain bars in New York get popular because everybody goes there. It's a trendy product.'" (p.150) Gee, Brian, UNIX was SPECTACULARLY trendy two years back and was trendy a year ago, but TODAY??
On p.157 there is a sidebar covering UNIX's rivals, which begins thusly: "UNIX has competition. Less than a year ago, the conventional wisdom held that UNIX would drive every other operating system out of the multiuser minicomputer market. Conventional wisdom was dead wrong."
Wha hoppen to the bandwagon, guys? Wha hoppen to all that momentum? Did you notice that the preceding paragraph asserts that UNIX is not even gonna win the MULTIUSER MINICOMPUTER market? Remember, two years ago there were an awful lot of dummies who thought UNIX was going to take over the single-user personal computer market!
(There are always die-hards, some with axes to grind, naturally. On p.160 an AT&T spokesperson wants all work on competing operating systems to be halted immediately and all the programmers put to work on UNIX applications software!)
It gives your FNE great pleasure to report this (and other) sudden outbreaks of sanity in the small computer industry with respect to UNIX on account of we've been telling you for over two years that UNIX was NOT going to take over the small computer world - and when we started telling you that we had DAMNED LITTLE COMPANY! In fact, our prognostications in this regard were obviously superior to just about any small computer industry 'expert' you would care to name.
[After that 'win' we almost don't mind blowing our prediction that the IBM PC2 would be based on the 80186. As of today (Feb 17) it appears that it will sport an 80286, bugs and all.]
ALMOST everybody, including us, agrees that UNIX is not going to die out. It is very useful for program development and other applications which work primarily with text or character data and hence can tolerate a slow, inefficient operating system. With a user interface based on RS-232 serial communication UNIX is obviously unsuited for graphics applications. With a fundamental design based on sharing a single CPU amongst N users, UNIX is obviously unsuited to the 'AT LEAST ONE CPU PER USER' market.
Why, then, is Sun Microsystems being so successful in selling 'UNIX' workstations featuring high resolution graphics (much like our VDHR systems, in fact), and which seem to assign an entire 68010 CPU per user in a networking environment? We think it is a lot like the high-fidelity loudspeaker market of 25 years ago: Loudspeakers with 'flat' frequency response were very popular, so all loudspeakers were advertised as having 'flat' response. On the other hand, in A - B tests in the showroom customers (then) invariably preferred loudspeakers with a strong peak in the bass - in other words, VERY 'un-flat' response. So all the manufacturers - the successful ones, anyhow - made certain that their advertised-flat speakers had a strong peak in the bass!
It looks to us as if Sun Microsystems is selling what the customer wants. 'UNIX' is still a potent buzzword amongst the great unwashed, so Sun sells a 'UNIX' system. High resolution graphics, one-CPU-per-customer and networking are also things that are popular with buyers, so Sun makes certain that its 'UNIX' systems feature high resolution graphics, networking and an entire CPU per customer! This is, of course, genuine UNIX...
If we get to call anything we want 'UNIX', then there is still a chance for it to take over as the industry standard operating system. The name really IS still popular amongst Kornbluth's Marching Morons. Now, all we have to do is rename 'MS-DOS' 'UNIX' and we have accomplished our goal, yes?
What's that? You say we have forgotten that UNIX is backed by industry giant AT&T? Well... (blush) we DID forget, for a while. That is, we forgot to keep a close eye on AT&T. Fortunately, Omri Serlin did not. Serlin publishes two industry newsletters: Supermicro and FT Systems. Since we really can't afford those newsletters, it is a good thing that Serlin publishes (what seems to be) a subset of Supermicro in UNIX/WORLD called Inside Edge.
(Serlin seems to focus mostly on minicomputers in the $10K - $150K range with an occasional mention of really cheap stuff like the IBM PC/XT. WE, on the other hand, concentrate mostly on personal computers in the $500-$15K range.)
Back in issue #36 of this rag, we reported what to us was an inexplicable series of events. (See "A CONVERGENCE OF INTERESTS" beginning at the end of page 11 and continue to the end of column 1 on page 12.) A mere vice-president of AT&T (which has MANY vice presidents), one Jack Scanlon, in effect called vice-chairman James Olson a liar and flatly refused to do something Olson essentially ordered him to do. We were utterly astonished by the sequence of events, and we hope our writeup reflected that astonishment. We wrote in part: "You may wonder why James Olson does not grab a large, industrial strength fly-swatter and apply it in the obvious manner to one Jack Scanlon."
Here is a follow-up, which should be credited entirely to Omri Serlin and UNIX/WORLD (Mar p.17 ff):
"The dust has settled at AT&T... it is clear that Information Systems scored a major victory in its battle to gain control... AT&T-IS also absorbed Jack Scanlon's Computer Systems Division and will now become the sole source of AT&T computers to the commercial marketplace [emphasis added]." Pay attention, now:
"The surprise announcement came from Jim Olson, chairman of AT&T Technologies." SURPRISE announcement? SURPRISE??? Apparently it takes a few months to wield that industrial-strength flyswatter, but Jack Scanlon has been well-swatted!
As Serlin reports, this has further consequences:
"...CSD under Jack Scanlon has been waging an expensive campaign to establish the UNIX system as a universal standard... One outcome of this victory will probably be a lessening of AT&T's UNIX System V zeal. The campaign to make System V a universal standard has been a significant resource drain, something AT&T can ill afford as it is laying off 11,000 people [others say 25,000-FNE] and cutting back all over."
"AT&T-IS' view is that it must be allowed to sell what the market wants, whether or not it conforms to the 'pie in the sky' views of [Jack Scanlon's] technologists. AT&T-IS does not particularly care who made the product, as long as it moves and generates revenues and profits."
Serlin sees the "takeover of CSD" as the "culmination of a long internal confrontation between... AT&T-IS and CSD." Uh, uh, Omri - Jack Scanlon publicly humiliated (if temporarily) perhaps the second-ranking executive in AT&T and was very promptly (by huge corporate standards) swatted down. Oh, yes: about that court order to maintain a separation of facilities: AT&T IGNORED the court order!
Crusty-dragon Scanlon has had his fangs pulled, his claws clipped, and his fire extinguished. AT&T did not even throw Scanlon a small bone. Jack is now in the process of being "dehired", which is very different from being fired. A person who is being dehired suddenly finds himself losing virtually all authority and often finds himself reporting to a former subordinate. He is moved to a smaller office with one less window. His faithful secretary of many years' service is transferred and a new secretary, one he has no part in selecting, is assigned to him. He is given the task of developing a marketing plan for Chad and Madagascar - and is told to be sure and complete that marketing plan in two or three years! This dehiring process takes, on average, about 8 months and terminates, of course, when the de-hiree quits.
Sometimes one can be profitably dehired. An example:
Once upon a time Hewlett-Packard was an instrument company. When folks started to tie H.P. instruments to minicomputers, H.P. decided (as a small sideline) to go into the minicomputer business. Sometime around 1972
H.P. assigned one Paul Ely to nursemaid H.P.'s small minicomputer operation and then turned its back on him.
Recently (1984) the high priests of H.P. executivedom were suddenly horrified to discover that the dinky little computer operation, all by itself, was LARGER THAN ALL THE REST OF THE COMPANY PUT TOGETHER! And the no-longer-dinky operation was still headed by Paul Ely, who held a relatively low rank among H.P.'s high priests. The high priests convened a secret emergency meeting:
"Hey, look! All us important executives have, between us, control of less than 50% of the company. That #&*%@$ upstart Ely, the low-born bastard, is in SOLE AND EXCLUSIVE control of over half the company! Just exactly HOW did this revolting state of affairs eventuate?" (Obviously because Paul Ely was doing a good job, but never mind rational stuff like that!)
The important high priests decided to re-divide the H.P. pie and to include Ely O - U - T OUT! Ely found himself reassigned to the Medical Instruments Division, which for an electrical engineer is less desirable than preparing a marketing plan for Chad and Madagascar.
Observing that he was being dehired, Ely took his time looking for other opportunities and has emerged as CEO of Convergent Technologies, whose explosive growth had carried it beyond the capacity of its original top managers. This deal looks (to us) good for Ely and good for Convergent Technologies. The guy Ely replaced is the guy who arranged to hire him, so the usual frictional office-politics accompanying a change of command are minimized.
So the Paul Ely dehiring process has been completed - to Ely's eminent satisfaction! Do not forget that Jack Scanlon, whose dehiring process has just begun, was THE major obstacle preventing Convergent Technologies from producing a computer for AT&T based on AT&T's 32-bit micros and their 256K DRAMs.
It is ironic that the pressure to support UNIX System V has dropped just as AT&T is preparing to introduce its Convergent Technologies-made 68010-based Model 7300... which apparently will use System V as its operating system...
(Did you think that great big corporations and their highly respectable top executives were above petty office politics, as in bickering and back-stabbing? Of course they are! Just ask IBM's Phil Estridge, who got PC sales yanked from under him because the other IBM cheeses were getting jealous of his success and the size of his corporate domain! But Phil got the last laugh - he just received a BIG, BIG promotion!)
After several issues of relative rationality, UNIX/WORLD is backsliding a bit. In March, while competing UNIX REVIEW finally spends an entire issue pointing out that UNIX is not portable, U/W has decided to become an apologist for UNIX (Pressure from advertisers? Naaah...) On p.63 U/W informs us that 1) Yes, UNIX is slower than other operating systems, but 2) That's O.K., because UNIX will use faster processors and other hardware than other competing operating systems. HUH? WILL YOU RUN THAT PAST US AGAIN?
The same apologist on the same page tells us that "A port to a non-UNIX system machine could take three programmer years!" (We LOVE to read such factual information in our technical journals!) We can imagine that vendors of software for the 3 million MS-DOS machines or the 2 million-plus Apple II machines or the 1 million-plus CP/M machines are simply worried sick over not being able to port their software to a UNIX system with an installed base of 350 users. Uh, did we forget those 3 million C-64s?
But of those two problems above, the assertion that it is O.K. to run slowly and inefficiently is the most pernicious. Unfortunately, that attitude is common:
The Feb 18 issue of EN carried a headline story titled "Barrage of 32-Bit Alternatives Challenges Motorola, Intel MPU Supremacy". Although the principal focus of that article was the substantial number of 32-bit micros coming to market, a side theme brings chills to the blood of right-thinking persons! (There ain't nobody here but us right-thinking persons, right?)
You see, there will be no software problem for all these different machines with all those different instruction sets because everything will be written in a high-level language! And sure that slows things down but that's O.K. because the new machines are so fast... haven't we heard this one before?
Most of you readers know by now that Intel's microprocessors are not our favorites, and many of you have sensed that Intel the company is not our favorite either. It is true that we are prejudiced against tee-niny linear addressing ranges, limited numbers of general-purpose registers and a lack of candor (candidate for understatement of the year) with respect to bugs in microprocessors which are in mass production. We don't even like the fact that IBM owns such a large interest in Intel.
That having been said, let us also acknowledge that today's world is one of shades of gray, not absolute blacks and whites. Let us tell you about some things which Intel has done right for the larger personal computer user community:
When Motorola made the move from the 6800 (two zeros) to the 68000 (three zeros) the change was rather dramatic. The instruction set was brand-new and the performance was an order of magnitude greater - right in the minicomputer range, in fact. Naturally, Motorola had to hire an entirely different group to run the 68000 program. The new group tried to keep 68000 software prices high and they tried to keep the 68000 out of the personal computer marketplace. THEY WERE SPECTACULARLY SUCCESSFUL AT BOTH OBJECTIVES!
On the other hand, Intel made the move from 8 bits to 16 bits two years earlier than Motorola with the 8086. The 8086 was therefore less advanced than the 68000, and nobody was claiming the 8086 would outrun a PDP11-45. As a result, Intel did NOT go out and hire a bunch of minicomputer types to run the 8086 program. And IBM, noting that there was an 8-bit version of the 8086 called the 8088, and noting that all those warehouses full of 8080 code could be translated to run on the 8088, adopted the 8088 for its personal computer. You know, the one with 16K RAM, BASIC in ROM, and an audio cassette interface. (Sometimes it's better to be lucky than to be good.)
The combination of the fact that the 8086 was upward-compatible with the 8080 via source translation and the fact that it was not all THAT advanced over the 8080/8085, plus the fact that Intel did not have a bunch of new minicomputer types running the show, prevented the 8086 from being taken over by the kinds of idiots who want to write everything, including operating systems and high-level languages, in a high-level language. Personal computers continued to use Intel micros and to run them 'mean and lean' in assembly language.
After two years of spectacular success in attaining its desired goals - to keep 68000 software prices high and to keep the 68000 out of the personal computer marketplace - Motorola suddenly saw IBM and Intel taking over the personal computer marketplace. Motorola reversed its objectives but unfortunately it did not replace the minicomputer types who were hired while its old policy was in force. You can imagine the disdain, dismay and consternation the minicomputer types felt when told that they were supposed to kiss and make up with the toy vendors, er, personal computer manufacturers!
Motorola's 68000 continues to be less than successful in the personal computer field, and the reason is those minicomputer types running Motorola's show who, revised policies or no revised policies, want nothing to do with (ugh!) lowly personal computers. The one personal computer which is somewhat successful in the U.S. market, the Mackintosh, was developed entirely without Motorola's assistance (as was a certain attached processor with lesser sales).
Intel, on the other hand, has developed a succession of microprocessors - the 8086, 80186, 80286 - which have advanced Intel's performance range right up to that of the 68000. In addition, Intel has had working math chip coprocessors in production for years now, while Motorola has not yet accomplished that. Because Intel has reached the 68000's performance range in stages, at no time was there a sufficiently dramatic advance in performance to tempt Intel management to hire an entirely new staff of minicomputer folk to run the show. The fact that Intel has maintained backwards-compatibility with all that 8080 software in all those warehouses is another reason not to ring in new folks.
(The iAPX 432, may it rest in peace, may have been an exception to the above. We all know how important the 432 has been to the personal computer marketplace, don't we?)
So what we have is an organization - Intel - which has achieved a level of performance nearly equal to the 68000 and which is staffed by folks who are downright friendly toward personal computer manufacturers.
Just as Intel has maintained a continuity of personnel - and policies - in moving from the 8080 to the 80286, the software vendors who have moved from the 8080 and CP/M to the 8086/8 and MS-DOS have maintained a continuity as well. And that means that the same folk who programmed the 8080 in assembly because it was necessary are, today, deciding to program the 80286 in assembly to maintain a performance advantage over newchums who are spinning their wheels with HLL-based software.
So as long as folks continue to have the attitude that 68000s are exclusively useful in incredibly complex systems you and we are not going to be able to use the 68000 in the MANNER WHICH IS BEST FOR US - and that manner is simple, assembly-based systems such as those which are commonly found in the IBM/Intel world. Even the Mack is too complicated, as anyone who has ever spent too damn long waiting for Mack's disk to stop spinning or swapped disks too damn often can tell you.
The whole idea behind HALGOL is to develop a 68000 environment which is as simple as the simplest IBM/Intel environment but which takes FULL advantage of the 68000's considerable internal resources for ONE task and ONE user. It is hard to believe, but we get a significant amount of criticism because HALGOL is not designed for multiuser/multitasking environments!
[We are printing this letter even though our horsewhip wounds have not yet healed.]
"On page 15 of #38 you ask the question "how do you edit a HALGOL program without line numbers?" The answer of course is "easily". What, you don't believe me? Shame on you! How then is it that I can edit my SC-Assembler source which has an internal structure much like an INTEGER BASIC program using a full screen editor with line widths of up to 255 characters? Or how can I edit my APPLESOFT programs using a full screen editor with line widths up to 240 tokens? I should explain what I mean by a full screen editor since I don't think it is valid to classify the type of editor which you have included in HALGOL as a screen editor since it modifies and ENTERS a line at a time and quite often leaves the actual contents of the screen differing from the source in memory [HUH?]. This process is error prone making it easy to enter rubbish by carriage returning on lines with outputted digits or not to enter anything at all by doing cursor moves rather than returns. Further, the source cannot be scrolled, only listed. What I call a screen editor gives a representation of the program as a document (of width which may exceed the screen) which may be scrolled thru in either direction (with preferably methods of moving by page as well as beginning to end). The editor is of necessity modal but if this bothers you a good way of knowing where you are (command or editor) is to modify the appearance of the cursor. The editors in question that I use were written by Bruce Love, a friend of mine who lives in Hamilton, New Zealand, although I have been using an editor by Laumer Research for the assembler which is similar in appearance for some two years. Bruce's editor is rather more clever allowing a program to be viewed by routine and also allowing cutting and pasting by line and section. The APPLESOFT editor is of course more elaborate since it must provide a way of scrolling fast (would you believe 10 times faster than APPLESOFT itself?) 
and must also include the ability to renumber to account for lines that may have been inserted (both editors allow lines to be inserted and deleted).
"Because APPLESOFT actually uses the line numbers as labels they must be displayed inside the editor and when a line is inserted a number must be assigned in case the line itself is to be referred to.
"An editor along these lines for HALGOL must be no harder to write than the SCASM editor, certainly nowhere near as hard as the APPLESOFT editor since, as you point out, line numbers are not used as labels. The hardest part is the listing of the tokenized source, since for the editor to be responsive this must be done swiftly. But you have already solved this one. The main trick in doing everything else is to split the source at the current line being edited (I guess even screen editors really edit line by line). One half is kept in high memory and the other half in low memory, the line being edited may be in a separate buffer or at the appropriate end of the "open" half. This means that insertions don't require the movement of large chunks of memory. If the lines are linked away from the line being edited towards high and low memory the source can be scrolled quickly and smoothly. As a line is scrolled it is moved in the appropriate direction and its pointers reversed. On exit from edit mode the two sections of the program are joined back together and the pointers are all restored to their normal state, i.e. linking forward thru the program. Note that inserting and deleting lines always takes place at the cursor and so is particularly easy. Line numbers can be present for 'internal use' without needing to be displayed ever. Programs will of course be automatically renumbered on exit from edit mode. I can provide more technical details of this scheme if you are interested. I am not suggesting that you have to change HALGOL. I agree with you that what you do there is your business, but I think what you say about the impossibility of editing HALGOL without line numbers is false, and I agree with Steve McI (any relation?) [no] that they are better dispensed with.
"Incidentally this brings me to point 2. As I have said I have been using full screen editors for my Apple programming for some time. This includes 6502, 68000 and APPLESOFT (when forced). Yet you have stated on more than one occasion how screen editors are alien to the APPLE world. You have also decried the speed of DOS 3.3 text files (yes I agree they are slow) and how this has made assembling large files painful. If you were just complaining about your personal situation this would be acceptable but you transfer your situation to all others with APPLES and this does not comply with the facts. The assembler I use (and also others that I know such as Big Mac) do not use text files and in fact assemble from disk many many times faster than EDASM. You chose to use the DOS-Toolkit like PHASE ZERO Xassembler but because this left you with an inefficient system does not put everybody else out here in the same boat.
"As you have stated (once by my count) it is possible for the Disk IIs to load data with reasonable swiftness (of course more recent equipment should perform better)
so why do you persist in ascribing this to Woz's not using a disk controller? A disk controller would certainly save memory but at the time the Disk II came out it would not have made for a faster raw data rate and would have allowed much less data on a disk.
"Next I wish to tell you a story. Once there was a hardware firm that produced a new machine based on the wondrous 68000 chip. The machine was introduced with only a small amount of software, mainly graphics oriented. But 'hang on' said the firm, magnificent things are coming, we have this really neat BASIC and it will be out by a week next Tuesday. Well, a week next Tuesday came and went and although a prerelease version of the BASIC was available and it was clear that much work was in progress the final release date seemed to be no closer than it had at first...
"I could go on in this vein but I won't. My point is that I am talking about DTACK, not Mackintosh and you should be careful about the rocks you are throwing hitting the glass walls of your own house.
"As you can tell from the letter you are reading I am a Mack owner. I have also had an APPLE II since 1979, and a DTACK board since 1983. Of all these machines the Mack is the one with the most extensive documentation, the Mack is the easiest one to program, and the Mack has by far the most software at one year old. I have three (yes THREE, that's three more than on the DTACK machines) different native code assemblers (one admittedly is part of the Aztec C package purchased by our department (Mathematics and Statistics at Aukland University) but is a genuinely different fairly conventional macro assembler. One is a pre release of Apple's development system (this seems to be available fairly freely to anyone who looks like they can write programs), and the last is by Mainstay, is commercially available (since November), and works very much like the SCASM assemblers (Bob Sander-Cederlof acted as a consultant in its design). The Inside Mackintosh documentation is vast and extensive (how anyone can genuinely gripe about a $150 price tag is beyond me - I have quite happily paid my nearly $100 [in N.Z. dollars, not REAL money - FNE] for DG and the density of information in IM is orders of magnitude higher) also I should point out that as far as Apple is concerned this is a charge for a service, the documents all carry the legend 'distribution in limited quantities does not constitute publication'. As for your claim that the Mackintosh requires you to do things one way and one way only nothing could be further from the truth. Within the framework defined by the Mackintosh ROMs almost all features are vectored and can be changed at will (my mathematical mind is much intrigued by the fact that in Quickdraw it is possible to 'change the metric' so that drawings can be made on 'curved surfaces'), and at the other extreme
the hardware is so well documented it is possible to dispense with the Mack environment entirely (both the Mainstay assembler and the Aztec C do this). As for the nonsense I have read (in BYTE for example) of the Mack being difficult to program, different maybe and certainly the volume of the documentation is daunting but the description of over 400 routines must occupy some space.
"Lastly I want to take you to task for a statement you made a couple of issues back (I can't remember exactly where) about 'slow relocatable code'. If I didn't know better I wouldn't have thought you were talking about the 68000. I know you must be aware that there is no speed penalty involved in using relocatable as opposed to absolute code on the 68000. Certainly it is usual to dedicate a register to point to a data space and this address must be obtained in using the relatively slower PC relative address mode but after this the data itself will be accessed via some form of address indirect and as you pointed out in one of your earliest issues this is the best there is. There is also no speed disadvantage using branches rather than absolute jumps (there is a bit of swings and roundabouts here but the discrepancies that exist are marginal anyway). As a practical example I have rewritten the Hat demo to run on the Mack (and as we all know this means it has to be relocatable if I want to run it as an application). In fact using the Mainstay assembler I can locate a program absolutely in memory, so as a first step I transported your program with the only changes being the point plotting routines (which take in their entirety less than half a second of the time for the demo). With the threaded interpreter running (using long action addresses) the demo takes 33 seconds, with the same arithmetic routines but the threading replaced by machine language macros the time is 31.5 secs (this is a bit slower than you might have thought but remember that those are all long addresses being used - other benchmarks I have run lead me to believe that the effective clock rate of a Mack running a totally RAM program is 5.1 MHz). When I rewrote the arithmetic I got the speed down to 17 secs and with the sines obtained by using a table lookup (not much more code required) down to 13.5 secs (on my static [DTACK] board that translates to 7 and 5.5 secs. 
Now I realize that this demo was one of your earlier programs and it looks like the arithmetic was stripped down from the higher precision package [true, and hand-assembled to boot - FNE] rather than being purpose written but I do think that you would have a difficult job writing a faster program even allowing that it need not be relocatable.
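[For readers who have not met one, the 'threaded interpreter' Peter is timing works like this: the compiled program is a flat list of action addresses, and the inner loop does nothing but fetch the next address and jump to it. A minimal sketch in Python, with function references standing in for the 68000 action addresses and a tiny invented opcode set; HALGOL's real action routines are, of course, 68000 code - FNE]

```python
# Minimal direct-threaded interpreter: the "compiled" program is a list
# of action addresses (function references here) with inline operands.
# Each action routine returns the next program counter, so the inner
# loop is nothing but fetch-and-dispatch.

def run(program):
    stack, pc = [], 0
    while pc < len(program):
        action = program[pc]
        pc = action(program, pc + 1, stack)  # action decides where pc goes next
    return stack

def push(program, pc, stack):
    stack.append(program[pc])  # inline operand follows the action address
    return pc + 1              # skip over the operand

def add(program, pc, stack):
    b, a = stack.pop(), stack.pop()
    stack.append(a + b)
    return pc

def mul(program, pc, stack):
    b, a = stack.pop(), stack.pop()
    stack.append(a * b)
    return pc

# (2 + 3) * 4, "compiled" to a thread of action addresses:
thread = [push, 2, push, 3, add, push, 4, mul]
```

Running the thread above leaves 20 on the stack. Replacing the threading with in-line machine-language macros, as Peter did, removes the fetch-and-dispatch overhead of the loop in `run`.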
"Anyway this is enough for one day. Please keep up the good work, when you speak from your personal experience I find you talk good sense (i.e. I agree with you by and large)." Peter McI. Mt. Eden Aukland N.Z.
Gosh, Peter, we're awfully glad you appended 'Please keep up the good work'! Let's (moan) see, maybe in reverse order?
While we would agree that relocatable code using the 68000 is about as efficient as code written to run in a fixed location which does not take advantage of zero-page addressing, we strongly believe that code - such as HALGOL - which is specifically designed to make the most effective use of the 64K zero page of our DTACK boards is more efficient than relocatable code.
We have NOT EVER griped about $150 for those 1000 pages of Mack documentation! Our gripe is having to LEARN those 1000 pages before programming Mack applications such as, perhaps, HALGOL. Your assertion that the density of information in those 1000 pages is ORDERS OF MAGNITUDE higher than in our rag is NOT reassuring. Just the opposite! We sincerely hope that it is possible to program HALGOL without recourse to 1000 pages of extraordinarily dense technical material which must be LEARNED - not just referenced occasionally.
Our really neat HALGOL came out a week LAST Tuesday!
Once upon a time a Canadian made a lot of money selling encyclopedias, so he moved to Los Angeles and bought the Lakers basketball team. He loved the COLOR purple but hated the NAME purple, so he dressed the Lakers in a purple uniform, which was described as "Forum Blue"! We never sold encyclopedias, so we can't buy the Lakers. But we also have a hangup over 'line editors', since every line editor we have ever seen stinks out the joint! Therefore, we ABSOLUTELY REFUSE to describe the HALGOL editor, which is obviously a line editor, as a (ugh!) line editor! Instead, we call it a screen editor. Perhaps we should call it a 'Forum Screen Editor'? (Apologies to Jack Kent Cooke.) We do thank you for pointing out that even screen editors edit one line at a time.
The Apple II environment does NOT feature any sort of editing worth the name. Loading a special program for editing purposes is NOT our idea of an editor. We sincerely believe, and are investing a lot of time and money to prove, that a good computer environment is one which has the editor and the compiler and the run-time code and the user's program code and the operating system all simultaneously resident, all instantly available, without modes, and oh yes: when the user keys 'RUN' the user program is ALREADY compiled!
We believe that if we can successfully demonstrate that such an environment is possible and practical then only a fool or idiot would go back to the bad old methods of having different programs to accomplish what any sensible person can see should be an integral part, but only a part, of the computer environment. And we do
believe that we are very close to demonstrating that such an environment is possible and practical.
'EXIT FROM EDIT MODE?' MODE? Please, Big Kahuna, there will be NO MODES OF ANY KIND in the HALGOL environment except for the RUN mode - we haven't figured out how to get rid of that one.
We believe that the most efficient method of editing a long program is to first get a hard-copy listing. Then, by using line numbers, one can INSTANTLY go DIRECTLY to any given line of the program, and UNFAILINGLY arrive at the correct line number, plus or minus zero. Your alternate method, which you assert is superior, cannot do that. Further, if two sections of the program look similar (see a complete listing of our 40-month-old 'Hand Assembler's Helper' for example) then you may well stop scrolling and begin editing the wrong line/lines - a mistake which will NEVER happen if line numbers are used.
We believe that your favored method of editing, which involves a separate program and scrolling without positive identification of specific program lines, is not merely inferior to our favored method but is VASTLY inferior! True, at this time there is no alternative to your method for the computer science priesthood (unless they use BASIC, which naturally they never will). But the old dinosaur-level techniques of compiled languages are exactly what HALGOL is designed to FIX! Hell, ANYBODY can design yet another compiled language with editor programs, compiler programs, code-generator programs, linker programs, error-detecting programs (LINT in the UNIX environment) and other such unwieldy nonsense. HALGOL is intended to be distinctly different, and distinctly better.
And we hope to PROVE that it is better - FNE
(Aside to Dick B: now do you see why we were not interested in your editor, which represents the very latest and best 6809-based dinosaur technology?)
"In RE: Halgol v.03
"Thanks!" Hugh L. Norman OK
Gee, it's nice to get a letter like this one (reprinted in its entirety) every once in a while -FNE
"Monsieur le Foonay,
"Eet ess, so, 'Foonay', 'ow you pronounce your nom, hein? Hwe frogs air not neethair as unperceptive que
les Canadiens! Een token of hwich, to find enclos' the hex' key to the 'Special Dark', tho I have thought that 'Special Dark' was a place reserved for fans of HLL, was it not?
"Le board 'EMS', stuffed by my erstwhile employer, she ran, but a bit like with a heart murmur, tu sait? It's mostlywaiting for me to get some kind of fast serial I/O together..." H.C.H. Ontremont Quebec
Look, H.C., your fellow Quebecan SPECIFICALLY ASKED US (#38, p.17, col.2) to ridicule and abuse him. Rather than break our long record of doing exactly whatever our correspondents ask, we naturally obliged him. And if you manage to get some fast serial I/O working with your 68000 board, we know of a largish personal computer company in Cupertino, CA who could use that on THEIR 68000 machine - FNE
"How can I, as a Canadian, purchase a 512K DTACK Grande and case? Your new policy of February 15, 1985 prevents me from doing so. I've been subscribing to DTACK Grounded for several years, but haven't purchased a Grande yet because as a self-supporting, full-time university student I had very little discretionary income.
"I graduated in December and have been working full time since. Now that I have enough money to comfortably afford a Grande, you say I can't have one. Isn't a money order in the amount of US $1059 enough incentive to fill out the export/customs forms?" Allan A. Calgary Alberta
No. We promised a long time ago (#2, p.10) to (until further notice) give John Doe in Fargo, North Dakota as good a deal as anyone or any company could get. If we had to fill out a five-page form for every board sale, we would have a full-time secretary who would be a touch-typist and experienced with five-page forms, and our prices would be higher because of the higher overhead (there is no such thing as a free lunch).
To spend time learning how to fill out the five-page form means that we would be giving you a much BETTER deal than Fargo John, and we never said we would do that - in fact, if you get a better deal, then we have broken our word with John, no? If John's Grande fails during the one-year warranty period he ships it to us UPS - cost, about $4 - and we fix it and ship it back to him - our shipping cost, about $4. If you buy a Grande and it fails during the warranty period, just exactly how do we proceed? How much does it cost? Who pays? See the problem? Incidentally, our last sale to Canada was over a year and a half ago; we found out about the new forms when a recent Canadian immigrant from West Germany placed an order - FNE
"Your publication is eagerly awaited each month by a small horde of true UNIX believers and IBM followers who decry your every remark - saying 'It can't be that way, ____ would never do that'. Then your prediction comes true again." Reginald J. Dharan Saudi Arabia
Well, SOME of them come true, Reginald - FNE
"I'm sure you saw the latest issue of Dr. Dobb's Journal, in which there was an article about a soon-to-be free UNIX look-alike operating system. This one, being written by hackers, may turn out pretty nicely. Who knows? The price, including full source code, sounds attractive. (And with the source code, you could obey the 80%/20% rule, right?) Actually, I shouldn't say 'you', since I doubt that 'you' are interested in pursuing it at all. I should say 'I', since 'I' would like very much to pursue it.
"I was a bit surprised by your fleeting comment about the 68000 Tiny BASIC in the earlier issue. A 3Kb BASIC interpreter sounds pretty exciting, until you read the part about it running (only a little) faster than Applesoft. Oh, well, like you've already said, even a 68000 isn't fast enough to run 8080 code real quickly, although apparently it can sometimes beat an 8080 if the code has been massaged more than just the minimum amount." John T. Apple Valley MN
You are correct about us not being interested in pursuing a public-domain UNIX look-alike. As long as we are running a company (which is busy and getting busier, incidentally), writing a newsletter and developing HALGOL we don't have time to pursue UNIX, whether the real thing or a clone, regardless of the associated price tag. And if we did have more time to spare, we would not want to fool with a slow, inefficient (for single-user, single-tasking) operating system even if it were free. Congratulations on your perspicacity.
We did not say much about the 68000 Tiny BASIC with all of 3K object code because 1) When we first read about it, we had less than 4 column inches total, on two different pages, to fill that issue of the newsletter, 2) We could add nothing to the Dr. Dobb's description at the time, and 3) We were personally VERY busy preparing the documentation for release v.03 of HALGOL, our own BASIC-like language with over 28K of object code!
John, do we sense that you disapprove of us just a tad? We do hope you are enjoying your HSC Z-80 based 68000 board - FNE
"What is this about CP/M-68K being slow? I have a
Compupro (now 'Viasyn') System 8/16 E (their S-100 box) with the 10MHz 68000, at present equipped with 256K bytes of static memory and 1 megabyte (2 boards) of pseudo-disk. As far as I'm concerned, it is FAST. With everything on the pseudo-volume, I can compile and link the Sieve benchmark in 21 seconds and run it in 3.6 seconds (10 iterations). Since the operating system doesn't do very much else for me, I don't know what other benchmarks would be appropriate. Oh, compile and link the Sieve with everything on hard disk (40 Mbyte Quantum): 62 seconds; run time obviously identical; I guess that means the hard disk is slow.
"So what's the fuss? It's not a great operating system by any means. Now that I have an IBM XT here too I can testify that MS-DOS is a much better piece of work. THERE is a slow computer! Same compile takes 39 seconds on the XT (also from pseudo-disk, main-memory DOS 3.0 version), and 41 seconds from hard disk. Execute is 11 seconds.
"How about the Sieve in HALGOL? I guess the 'compile' time would be zero (a pretty good ratio there!). What about run time, hmmm? I know 68000 assembler time has been claimed at 0.49 seconds in BYTE (Jan '83), but I haven't written an assembler routine to try it. That BYTE speed was supposedly at 8MHz.
"CP/M-68K is a reasonable environment for software development. I use Ed Ream's RED screen editor (sold with C source at $99; it is quite fast). The operating system costs $350 to the end user... that doesn't seem out of line to me. Comments (covering ears and crouching down)?" Ralph D. Mt. Laurel NJ
We have received several similar letters, Ralph, including the lengthy first part of John T's letter above complete with accompanying documentation - FNE
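For reference, here is what the Sieve benchmark in these letters actually computes, transcribed into Python. The classic BYTE version uses size 8190 (so 8191 flags) and reports 1899 "primes" per iteration; ten iterations make one standard benchmark run. The timings quoted above belong to Ralph's and BYTE's machines, not to this sketch.

```python
# The BYTE-style Sieve of Eratosthenes benchmark. Each surviving flag
# at index i represents the odd number i + i + 3, so the count is the
# number of odd primes up to 2 * size + 3.

def sieve(size=8190):
    flags = [True] * (size + 1)
    count = 0
    for i in range(size + 1):
        if flags[i]:
            prime = i + i + 3                    # 3, 5, 7, ...
            for k in range(i + prime, size + 1, prime):
                flags[k] = False                 # strike out the multiples
            count += 1
    return count
```

With the default size this returns 1899, the figure every published version of the benchmark reports.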
We have received a number of letters from MFC (Mostly Faithful Correspondent) Nils D. in which he takes great effort to keep us abreast of happenings in the C-64 world. Lately he has been writing us about almost getting a new programming language called PROMAL, which we understand is now available for sale over the counter. We don't know anything about that language except that the author is a DTACK subscriber and that he is hard at work on an expanded IBM version. (Bruce, can you send us a copy of an article or something describing PROMAL?)
But what really has stirred Nils to a fine lather is speculation over who REALLY might be the REAL intended customer for HALGOL - by simple arithmetic he thinks he has eliminated DTACK board owners as the intended customers. Here's what you do, Nils: go buy a paperback copy of Robert Ludlum's The MATARESE CIRCLE.
Rip out the last 20 pages of text. Now read the book, and when you get to the end imagine that the Matarese conspiracy lives on, and is the force behind HALGOL and will determine HALGOL's destiny.
Actually, we think that the availability of a really fast programming language which is inexpensively obtainable in TRUE source form will sell some boards for us - and that includes 68020 boards in the future. The folks who fork over $50,000 to Microsoft for the source code of their Z-80 BASIC get "source code" which averages one comment every 12 assembly lines. If you think that is Microsoft's "TRUE" source we have two bridges and some Florida swamp real estate we want to talk to you about!
IBM has just announced that production of the PCjr will be cut off in April. That is not completely in accordance with the facts. The facts are that production of the PCjr has long since been permanently terminated. What is being shut down in April are the MODIFICATION (not production) lines which took the old-style PCjr out of the box which Teledyne had placed it in and which installed a new keyboard and some other modifications to make the jr more expandable. Since all of the jrs which are gonna GET modified will have BEEN modified by April, the MODIFICATION line is being shut down!
The low-end jrs are NOT being modified - IBM did not even bother replacing the keyboard on that model, remember? And so those low-end jrs are destined for a bulldozed hole in a Tennessee hillside if they are not already there. IBM has raised jr's price back up as of Feb 1 and sales have naturally dropped to near-zero.
One market research firm has estimated jr's '84 sales at 240,000 copies, of which 190,000 were sold in the fourth quarter with the help of that enormous price cut, individual sales incentives and a cute color clone of MackDraw. Presumably, another 40,000 jrs were sold on the first day to lots of folks who were going to get rich writing jr software, books, magazine articles, etc. SAY! Just how many of those folks wound up getting rich?
Our guess is that IBM thinks it can dump the other 200,000 to 300,000 jrs on the shelf during the next Xmas selling season. Maybe. We think the market might have moved on by that time, and that IBM just might have to drop the price to $49.95 to move them, which naturally it will not do. Well, there are lots of hillsides in Tennessee which haven't been filled yet.
The above story is definitely relevant to the following stuff about disk drives, so pay attention:
John Dvorak and us have disagreed before over floppy disk stuff and we had in the past always been proven right, so when Dvorak asserted in his InfoWorld column recently that the PC II would feature 5 1/4 inch floppies, we chortled SURE! That's why IBM is buying 500,000 3.5 inch diskettes from Sony per month, and why they have placed orders for 1.8 million 3.5 inch double-sided floppy drives with two Japanese manufacturers, and why IBM has terminated Tandon's contract to build 5 1/4 inch floppy drives! Uh, will you play a little humility music, maestro?
EET (Electronic Engineering Times) arrived the day after InfoWorld and reported that IBM was waffling over the disk drives in the PC II, and the next week EET reported that IBM was reversing course:
"According to software sources, IBM has begun having second thoughts about being able to convince business and professional customers of the soundness of the 3 1/2 inch drive, planned for many months as being the only drive in the PC successor. Insdeed [sic], sources say that IBM market research has indicated that few present PC owners are willing to sacrifice their investment in 5 1/4 inch-format software by purchasing an incompatible PC II."
So there you have it. We WERE right, but IBM did some market research and changed its mind, which is one of the reasons the PC II has not already been introduced. Prevailing rumors say look for the II in June or July. The fact that IBM is now doing market research and ACTING ON THE RESULTS OF THAT RESEARCH is bad news for its competitors - that means we will see fewer utter disasters such as that incredibly bad original jr keyboard.
On the other hand it also means that IBM will now tend to REACT TO the market rather than LEAD the market, so there will still be a place for a small company with good ideas. So we won't jump out of a 20th-floor window yet, or even jump out of our first-floor window 20 times.
HALGOL v.03 was finally shipped on Mar 4th and 5th, all 150 lbs of it! The release consists of 28 pages of documentation, printed on one side for easy photocopying, and a DOS-less demo disk. Additional documentation was included in an Applesoft program called README on the demo diskette.
While the 28-page HALGOL documentation was at the printers, James S. and us took turns entering the HALGOL mailing list, using HALGOL and the QD DATA disks as discussed in the last issue. The mailing labels for the v.03 mailing were printed from arrays saved on QD DATA disks using a HALGOL program. (So were the labels on the disks themselves, naturally.) In the process of making HALGOL do some useful work, four small bugs cropped up. By noon on Saturday, Mar 9, those bugs had been fixed and we spent that Saturday afternoon writing up those bugs and their fixes. On Tuesday, as we were about to go to the printers with the bug fixes, we received notice of yet another bug from a user. So we fixed that one too, and incorporated the bug fix in the new writeup.
On about Mar 12, we sent the same list of folks another three pages explaining the bugs, along with a short Applesoft program listing to permanently patch the HALGOL object files and a method of checking the fixes for correctness afterward. The three pages also contained some notes on errata in those 28 pages sent earlier. Sigh.
The 150 lbs (!) of stuff we sent out free is release v.03; with the bug fixes incorporated, it becomes release v.03A. HALGOL v.03A is now routinely included free with every Apple-compatible DTACK board we sell.
We have been spending so much time on documentation lately that we have had nightmares about turning into a big document, sort of like some of those horror Sci-Fi flicks. If you have asked us for a DTACK info pack, or know someone who has, we don't have such a pack now. The old one is out of print and was dated Nov '83 besides. We absolutely refuse to do any more documentation for a while.
We completely understand that this has a potential negative impact on our sales, but this is how it is, folks: we have spent most of the last two days trying to figure out how to increase our production capacity. Right at the moment, we are at about 105% of capacity, and overtime is not a permanent solution to the problem...
Some folks might assume that a low-overhead outfit which is running at capacity and which requires payment on delivery (if not before) probably does not have a cash-flow problem. Some folks would be correct.
The thing is, we don't know anybody in the small computer industry who is confident that the prevailing (excellent) business conditions will continue for more than another 5 minutes. And not everybody is doing as well as us; the PC-clone makers and chip-vendors and most disk drive vendors would be pleased to have our production problems...
We are surely tired of doing documentation.
If you did not receive free HALGOL v.03 documentation, you can buy it from us. Or even the source code:
Item #1: 28 pages of printed documentation, printed on one side, plus a 3 page errata and bug fix sheet. $3.50 U.S. and Canada, $5 elsewhere.
Item #2: A DOS-less HALGOL demo disk. Requires Item #1 and a 92K minimum DTACK board for use. $10 U.S. and Canada, $12 elsewhere.
Item #3: A master QD DATA diskette for folks who are too lazy or too stupid to follow the detailed instructions for creating same which are provided as part of that 28 page document in Item #1. $10 U.S. and Canada, $12 elsewhere.
Item #4: Complete HALGOL v.03A source code, with 6 logical disks compressed onto 4 physical disks. Requires Item #1 as a part of the documentation. Requires the PHASE ZERO ASSEM68K to get an assembly listing. (The complete assembly listing on regular Z-fold 8.5 X 11 computer paper weighs 4 lbs!) $28 U.S. and Canada, $32 elsewhere.
Item #5: This is the luxury edition of the HALGOL v.03A source code. Six physical disks are provided, corresponding 1 to 1 with the six logical disks. Saves a little bit of copying back-and-forth for folks with more money than ambition. Still requires Item #1 as a part of the documentation and the PHASE ZERO ASSEM68K to get an assembly listing. $40 U.S. and Canada, $45 elsewhere.
The above prices are in U.S. funds. For delivery in California, add 6% sales tax.
PERMISSION IS HEREBY granted to anyone whomever to make unlimited copies of any part or whole of this newsletter provided a copy of THIS page, with its accompanying subscription information, is included.
THE FOLLOWING TRADEMARKS ARE ACKNOWLEDGED: Apple II, II+, IIe, Applesoft, ProDOS, Mackintosh?: Apple Computer Co. Anybody else need a 196 millionth ack, have your legal beagles send us the usual threat.
SUBSCRIPTIONS: Subscriptions are $15 for 10 issues in the U.S. and Canada (U.S. funds), or $25 for 10 issues elsewhere. Make the check payable to DTACK GROUNDED. All back issues are kept in print; write for details. The address is:
1415 E. McFadden, Ste. F
SANTA ANA CA 92705