The visibilizing analyzer


Less than 50 years ago, this is what the future of data visualization looked like — H. Beam Piper, "Naudsonce", Analog 1962:

She had been using a visibilizing analyzer; in it, a sound was broken by a set of filters into frequency-groups, translated into light from dull red to violet paling into pure white. It photographed the light-pattern on high-speed film, automatically developed it, and then made a print-copy and projected the film in slow motion on a screen. When she pressed a button, a recorded voice said, "Fwoonk." An instant later, a pattern of vertical lines in various colors and lengths was projected on the screen.

This is in a future world with anti-gravity and faster-than-light travel.

In 1962, there were already computer-controlled CRT displays, though I think they were all vector-based (because the memory needed to control a raster display would have been so large and expensive, I guess).

For example, here's the SAGE Operator's Console:

[In keeping with another convention of the science fiction of H. Beam Piper's era, in which everybody smokes, the SAGE console does have a built-in cigarette lighter and ashtray.]

Ivan Sutherland's 1963 Sketchpad system was implemented on a

ten bit per axis electrostatic deflection system able to display spots at a maximum rate of about 100,000 per second. The coordinates of the spots which are to be seen on the display are stored in a large table so that computation and display may proceed independently.
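The "large table" Sutherland mentions is what later came to be called a display list: the display repeatedly replays the stored spot coordinates while the program edits the table independently, and the 100,000-spots-per-second limit bounds how complex a flicker-free drawing can be. A toy sketch of the idea in Python (the function names and the 2,000-spot figure are mine, not Sketchpad's):

```python
# Sketchpad-style display list: spot coordinates live in a table, so the
# program edits the table while the display replays it independently.
SPOTS_PER_SECOND = 100_000  # maximum plotting rate quoted above

display_list = []  # the "large table" of (x, y) spot coordinates

def plot(x, y, bits=10):
    """Program side: store one spot, ten bits per axis as in Sketchpad."""
    assert 0 <= x < 2**bits and 0 <= y < 2**bits
    display_list.append((x, y))

def refresh_rate():
    """Display side: how many full replays of the table fit in a second."""
    return SPOTS_PER_SECOND / len(display_list)

# A drawing of 2,000 spots could be refreshed 50 times per second;
# quadruple the spots and the picture starts to flicker.
for i in range(2000):
    plot(i % 1024, (i * 7) % 1024)
```

The point of the split is exactly the one Sutherland states: computation and display proceed independently, with refresh speed traded against picture complexity.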

I haven't been able to determine when the first descriptions of raster graphics devices became available. But in the 1940s, engineers at Bell Labs built an (analog) real-time spectrograph that used a fluoroscope-type screen for display, described in Potter, Kopp, and Green, Visible Speech, 1947.

So it's interesting that an imaginative writer in 1962, a dozen years after the first color television broadcasts, couldn't imagine a display for the "visibilizing analyzer" that didn't involve the automatic development and printing of "high-speed film".
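For what it's worth, the signal processing Piper describes is essentially a color-coded spectrogram: a filter bank splits the sound into frequency groups, and each group's energy is rendered as a colored line. A minimal sketch of that mapping in Python (the band count and the hue scale are my assumptions, not anything from the story):

```python
import numpy as np

def visibilize(samples, rate, n_bands=12):
    """Split a sound into frequency bands and assign each band a color,
    low frequencies toward red, high toward violet -- a rough model of
    Piper's 'visibilizing analyzer', minus the film developing."""
    spectrum = np.abs(np.fft.rfft(samples))
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / rate)
    # Group FFT bins into n_bands equal-width bands (the "set of filters").
    edges = np.linspace(0, freqs[-1], n_bands + 1)
    bands = [spectrum[(freqs >= lo) & (freqs < hi)].sum()
             for lo, hi in zip(edges[:-1], edges[1:])]
    bands = np.array(bands) / (max(bands) or 1.0)
    # Hue runs from red (0.0) toward violet (0.8); the level gives the
    # length/brightness of each vertical line in the projected pattern.
    return [(0.8 * i / (n_bands - 1), level) for i, level in enumerate(bands)]

# A pure 440 Hz tone lights up essentially one low-frequency band.
rate = 8000
t = np.arange(rate) / rate
pattern = visibilize(np.sin(2 * np.pi * 440 * t), rate)
```

Nothing here needed to wait for anti-gravity; the analog filter-bank version is exactly what the Bell Labs "visible speech" machines did in the 1940s.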


  1. KeithB said,

    August 24, 2011 @ 1:29 pm

    The Uniscope, a "Glass TTY", was available in 1964.

    [(myl) The linked page says that "Each character was individually drawn as a series of splines using technology developed for displays in military cockpits", which implies that it was a vector rather than a raster device. If it was like the later Tektronix "green screen" devices, the display storage was (as I recall) the screen itself (a "storage tube"), where images persist until erased.]

  2. R. Wright said,

    August 24, 2011 @ 1:37 pm

    Science fiction doesn't tend to age well in this respect. Look at Asimov's Foundation books; in one of them, if I recall correctly, he goes on at length about the wonders of clothes-washing devices small enough to fit in a closet.

    I wonder if the genre of science fiction (of which I'm a big fan) sometimes attracts unimaginative writers because of its relative lack of plot constraints.

  3. Faldone said,

    August 24, 2011 @ 2:23 pm

    Speaking of Asimov's Foundation, he had the scientists of a galactic empire using slide rules.

  4. peter said,

    August 24, 2011 @ 4:22 pm

    Well, Faldone, people are still using books! And air-traffic controllers with advanced electronic display systems still typically use an individual cardboard token for each plane they are managing.

  5. Bob Moore said,

    August 24, 2011 @ 4:36 pm

    According to Wikipedia, the original DEC computer, the PDP-1, produced in 1960, came with a "Type 30 vector graphics CRT display". This was the machine that Spacewar was written for, perhaps the first graphics-based computer game. In the late 1960s and early 1970s, I recall that both the Stanford and MIT AI labs had raster display terminals for their time-sharing systems, but because the required memory was so expensive, the MIT AI lab display system had a shared memory pool of 1K RAM chips, with not enough memory to support all the CRT displays simultaneously. I believe the Stanford system also had a shared memory pool using a different technology, but I am not certain of that.

  6. Nick Lamb said,

    August 24, 2011 @ 4:55 pm

    Polish writer and critic Stanisław Lem despaired at the poor level of scientific education among (English-speaking) science fiction authors in his heyday. It's not very different now. The audience doesn't seem to much care; Hard SF (where you can expect the author intends their story to be self-consistent and not flagrantly disobey any of what seem to be the hard and fast rules of our universe) is an even smaller niche within the niche.

    [(myl) Paradoxically, I suspect that the problem is often too much technical knowledge rather than too little. The technology in Neuromancer has aged relatively well in part because William Gibson knew relatively little about the networked computing of his day. The "visibilizing analyzer" would have remained fairly plausible if Piper had left out all that description of the mechanism behind the scenes.]

  7. matthew edney said,

    August 24, 2011 @ 6:34 pm

    Might Piper's reliance on film be related not to display but to storage, and more importantly to a storage medium that can be copied? But I agree in general that there was enough technology commonly available then for easy extrapolation …

  8. vic said,

    August 24, 2011 @ 8:43 pm

    According to Wikipedia, the IBM 2260, which used a raster display, was released in 1964. The article describes the refresh as being done by storing the image in an electromechanical delay line.

  9. vic said,

    August 24, 2011 @ 9:04 pm

    And not quite raster, but not quite vector either, the SWAC computer at UCLA (1950) could be subverted to generate bit-mapped messages and animations.

    The computer used Williams Tube memory. These were a type of cathode-ray tube in which bits could be written as an array of dots whose charge persisted for a significant fraction of a second. The dots would be read and rewritten before they faded out.

    The version used in SWAC had a phosphor so that the dots were visible. I don't remember how many dots were on the tubes for the SWAC, but the wikipedia page on Williams Tubes says that they could store 512-1024 bits per tube.

    A programmer who knew how the memory was mapped to positions on the Williams Tubes could write data to memory which on the CRT would look like text or even crude pictures. Someone wrote a program which created a scrolling message saying something like "GOOD MORNING MR ****" (it was the name of some professor, I don't remember who) along with a picture, which I think might have been a dog wagging its tail – forgive me, I saw it around 1965 or 1966, when I was in the UCLA computer club. They would run the program during things like open house for the engineering department.
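The trick described here, writing memory words whose bit patterns form a picture on the phosphor, is easy to illustrate. A toy sketch (the 16-bit word width and the one-word-per-row layout are assumptions for illustration, not SWAC's actual geometry):

```python
# Each memory word is one row of dots on the Williams tube face, so
# writing carefully chosen words to memory "draws" on the screen.
WORD_BITS = 16  # assumed word width, not SWAC's actual geometry

def render(words):
    """Show memory contents the way a phosphor Williams tube would:
    1-bits are visible dots ('#'), 0-bits are dark ('.')."""
    return "\n".join(
        format(w, f"0{WORD_BITS}b").replace("1", "#").replace("0", ".")
        for w in words
    )

# Words chosen so their bit patterns spell a crude "HI" on screen.
hi_picture = [0b1001011100000000,
              0b1001001000000000,
              0b1111001000000000,
              0b1001001000000000,
              0b1001011100000000]
print(render(hi_picture))
```

The scrolling-message program would have done the same thing in reverse: shift the stored words each cycle so the dot pattern marched across the tube face.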

  10. Ran Ari-Gur said,

    August 24, 2011 @ 11:00 pm

    I'm not sure that accurate prediction (a.k.a. "aging well") is necessarily something that science-fiction writers should strive for. I mean, if you took a very realistic novel from today and sent it 200 years back in time, its recipients would find any technological elements to be unintelligible. If you tried to modify the book to explain the technology accurately but in a way that they could understand . . . I think you'd simply fail, but even if you succeeded, I don't think you would have created a book worth reading.

  11. maidhc said,

    August 24, 2011 @ 11:19 pm

    Raster graphics would happen as soon as someone came up with an electronic way of putting text into a TV signal, wouldn't it? I can't say I know exactly when that was, though.

    There was quite a long period when raster graphics was thought of as a cheap inferior substitute for the high-quality graphics you got with a vector display. I had someone make that point to me just a couple of days ago, although I suppose you might classify him along with people who still prefer reel-to-reel tape to digital audio.

  12. Brian said,

    August 24, 2011 @ 11:34 pm

    It really is shocking how often SF writers of the "Golden Age" demonstrated unawareness of the current state of technology already achieved. But a single person can only track so much.

  13. maidhc said,

    August 25, 2011 @ 1:39 am

    I was watching an episode of Star Trek with my stepson a few years ago. It was about how a space mine got stuck in the side of the ship and they had to send someone out to defuse it. I commented, "They use robots to defuse bombs right now. Why is it that centuries in the future they have to send a guy out in a space suit?"

    His answer: "If they let you write the scripts, there would be no drama in the show!"

  14. John Roth said,

    August 25, 2011 @ 4:48 am

    I don't see any evidence in that passage that the sound sample was put into any kind of computer device in the first place. It seems like a rather complicated mechanical device. For a discussion of raster vs vector displays to make sense, I suspect that you have to assume the sound sample was digitized and then analyzed with various algorithms.

    [(myl) Not really — an old-fashioned oscilloscope is (or can be treated as) an analog vector device, and an old-fashioned television is certainly a raster device. And as I noted, engineers at Bell Labs built real-time analog spectrum analyzers with fluoroscope-like displays back in the 1940s, as part of an experiment to see whether people could learn to understand speech from visual displays.

    However, in this particular case, there do appear to be some things called "computers" around. And maybe the device shown in this illustration (from the original) is just an analog "picture phone":

    But I assumed, perhaps incorrectly, that this is the artist's representation of a computer console.

    (By the way, can anyone tell me what those shoulder-flares on retro-futurist costumes are called? And what they were assumed to be for?)]

  15. Faldone said,

    August 25, 2011 @ 5:40 am

    Golden Age SF always had to explain how things worked. The equivalent in a mid-nineteenth century story about the mid-twentieth would include lines like: "You know, of course, my dear Happleworth, that when we change the position of this lever, electrick contacts are made and the current is allowed to flow through the conductors, the phlogiston particles in the tube are excited and light is produced."

    We've gotten beyond that and writers can say things like: "He inserted the data cube in the reader and did a quick integration."

    And yes, we still read books, for now, but then we don't have a galactic empire yet.

  16. Alon Lischinsky said,

    August 25, 2011 @ 6:45 am

    Obligatory TVTropes reference. (This troper takes no responsibility for the potential productivity loss.)

  17. KeithB said,

    August 25, 2011 @ 8:44 am

    The first section under "comments" in this movie review is appropriate here:

    Start Quote:
    For several glorious decades, movie scientists were happy to toil away on bat re-bigulators, tissue enphosphorators, animal humanifiers and romantic triangle disentangulators; until one day, some interfering little creep (probably the same one who pointed out the design flaw in the Emperor’s snazzy new outfit) felt compelled to ask, “Yes, but what’s the point?”

    And then it all stopped. Movie Science had its feelings hurt. And it reacted by becoming sensible. And, all too often, let’s face it – dull. So, although a large part of me yearns to see science depicted accurately on the screen, there’s another part of me – small, but surprisingly vocal – that mourns for the days of Seriously Silly Science. This is why a film like Bats – otherwise, a depressingly predictable little effort – can send me into a swoon of giddy delight, by producing a Mad Scientist who, when someone has the temerity to question his plan for creating giant, omnivorous, killer Chiroptera, simply raises his eyebrows in a puzzled way and says, “I’m a scientist. That’s what we do.”

    end quote

  18. ENKI-][ said,

    August 25, 2011 @ 9:15 am

    @maidhc Raster *is* technically inferior to vector in terms of potential image quality, but vector has low upper limits on the complexity of forms. If you are drawing sprites made of geometric figures, or you are plotting graphs, vector displays are much better. If you are drawing photographs, raster is better.

    The thing is, raster graphics at a certain point became cheaper, and when the big advances in computer graphics began being made by thirteen-year-olds on their Ataris as opposed to thirty-one-year-olds on their custom-built coin-op machines, raster became of central importance.

  19. C12VT said,

    August 25, 2011 @ 9:15 am

    There are two reasons for this phenomenon. One is the limits of our ability (even the most thoughtful among us) to predict how technology will change.

    The other is that the fiction author's primary job is not to predict the future, but to tell a good story. Wowing the audience with shiny new tech can be part of this, but at the same time, the story, setting and characters have to remain relatable.

    Also, if an author wants to show the impact of one particular new technology, introducing twenty other new technologies could muddy the waters. Exploring every single element of future-tech (and its second-order effects – our society is different today because of TV and the internet; other technological changes would have similar wide-reaching influence) isn't always the best narrative choice.

  20. Dan T. said,

    August 25, 2011 @ 9:53 am

    The SyFy channel (formerly "SciFi", but the marketing types got to it with a silly cutesy spelling) has been doing some intentionally-cheesy films like Mega Python vs. Gatoroid with unabashedly silly science.

  21. SharonZ said,

    August 25, 2011 @ 11:41 am

    I recently re-read all seven of Asimov’s Foundation books, and what I was most struck by was his treatment of smoking. In a time many millennia from now, people are still using cigars and cigarettes and the futuristic nod is that desks and tables have “atom flash” disposals for “dead tobacco” (butts).
    And, just as you were writing this entry about an anachronistic futurism, I was reading a novel (written in 1991) where an “artificial intelligence voice address system” was instructing descendants (who had lost all technology) of the planetary settlers from Earth how to put together some computers whose components had survived. The “computer cards” went into “slots” in “plastic boxes.” The character “went over the chips, circuit by circuit, scrutinizing resistors and capacitors” and was instructed to “be sure that each card is seated securely in the grooves.” This was equipment brought by humans capable of interplanetary travel. Guess they never figured out the whole quantum computer thing. Fortunately, the rest of the story did not suffer from the same lack of imagination.

  22. Mr Fnortner said,

    August 25, 2011 @ 2:56 pm

    A few thoughts: It was not (is not) unusual for cameras to be affixed to computer screens via a hood or snout so that still or motion pictures of the display could be recorded. The author's desire to put the output on film is reasonable in that light. In the sixties, our mainframe computers had consoles whose beam was directed to pass through one character at a time in a plate containing "all" the characters before striking the screen. The resulting characters were a sort of hybrid: vector-looking characters from a raster. And finally, there might be a tendency to elaborate on technology by an author who wished to convey that he or she has done enough research, or is clever enough, to be credible. There are many pitfalls in this approach, not unlike the trouble awaiting a liar who gives too much detail in an otherwise unremarkable (though false) tale.

    [(myl) Polaroid color film was introduced in 1963, so in that sense Piper's 1962 story was forward-looking.]

  23. Matt McIrvin said,

    August 27, 2011 @ 12:50 pm

    Hari Seldon in _Foundation_ did have a pretty nice desk calculator with symbolic algebra capabilities. Asimov was a little ahead of the curve there.

    In _The Moon is a Harsh Mistress_, the protagonist is gobsmacked at the idea that Mike the artificially intelligent supercomputer can generate a raster image simulating an animated human face in real time–not so much at the realism displayed, as that Mike is even capable of doing the calculations to generate the required number of pixels per frame.

  24. Matt McIrvin said,

    August 27, 2011 @ 12:50 pm

    (That last was by Heinlein.)
