Discussion:
What Makes an Architecture Bizarre?
Quadibloc
2013-02-28 13:26:02 UTC
Permalink
This NASA video on YouTube,

http://youtu.be/nrwpXEiTDVk

shows what is apparently a maintenance console on a Control Data 3x00
computer at around 1:25 into the movie.

The normal console, though, doesn't quite match that of a 3400, 3600,
or 3800.

Anyways, in a search for more information, I came to a Wikipedia page
with a link to a PDF of a lecture from a series about "bizarre
architectures", which covered the Control Data 3600 and 6600. They
apparently were bizarre because they had 6-bit characters instead of 8-
bit characters.

I'll have to say that I found that hyperbolic.

John Savard
n***@cam.ac.uk
2013-02-28 13:39:16 UTC
Permalink
Post by Quadibloc
Anyways, in a search for more information, I came to a Wikipedia page
with a link to a PDF of a lecture from a series about "bizarre
architectures", which covered the Control Data 3600 and 6600. They
apparently were bizarre because they had 6-bit characters instead of 8-
bit characters.
I'll have to say that I found that hyperbolic.
6 bit characters were fairly widespread - the ICL 1900 had them, too,
in several variants.

The bizarrest ones I ever saw were ones that packed five 7-bit
characters. I once saw some program code that had been converted
from a (w.l.o.g.) 8-bit machine as binary and then written out
as characters. Problem: get it working on the original system,
starting from the printout. Not that hard, as it was Fortran with
very few multi-digit numbers.


Regards,
Nick Maclaren.
Morten Reistad
2013-02-28 13:47:06 UTC
Permalink
Post by n***@cam.ac.uk
Post by Quadibloc
Anyways, in a search for more information, I came to a Wikipedia page
with a link to a PDF of a lecture from a series about "bizarre
architectures", which covered the Control Data 3600 and 6600. They
apparently were bizarre because they had 6-bit characters instead of 8-
bit characters.
I'll have to say that I found that hyperbolic.
6 bit characters were fairly widespread - the ICL 1900 had them, too,
in several variants.
The bizarrest ones I ever saw were ones that packed five 7-bit
characters. I once saw some program code that had been converted
from a (w.l.o.g.) 8-bit machine as binary and then written out
as characters. Problem: get it working on the original system,
starting from the printout. Not that hard, as it was Fortran with
very few multi-digit numbers.
7-bit ascii packed 5 to a (36-bit) word BIZARRE? That is how
it is, on a PDP10. Some of us still run tops20, you know.

-- mrr
unknown
2013-02-28 15:10:33 UTC
Permalink
Post by n***@cam.ac.uk
6 bit characters were fairly widespread - the ICL 1900 had them, too,
in several variants.
Unisys had both 6 and 8 (or 9?) bit chars afair.
Post by n***@cam.ac.uk
The bizarrest ones I ever saw were ones that packed five 7-bit
characters. I once saw some program code that had been converted
from a (w.l.o.g.) 8-bit machine as binary and then written out
as characters. Problem: get it working on the original system,
starting from the printout. Not that hard, as it was Fortran with
very few multi-digit numbers.
Please explain:

Did you have the original source code stored as 5 letters per 36 bit
word and 10 letters in 72 bits, then those 72 bits had been split into 9
8-bit bytes in the printout?

Terje
--
- <Terje.Mathisen at tmsw.no>
"almost all programming can be viewed as an exercise in caching"
n***@cam.ac.uk
2013-02-28 16:32:40 UTC
Permalink
Post by unknown
Post by n***@cam.ac.uk
The bizarrest ones I ever saw were ones that packed five 7-bit
characters. I once saw some program code that had been converted
from a (w.l.o.g.) 8-bit machine as binary and then written out
as characters. Problem: get it working on the original system,
starting from the printout. Not that hard, as it was Fortran with
very few multi-digit numbers.
Did you have the original source code stored as 5 letters per 36 bit
word and 10 letters in 72 bits, then those 72 bits had been split into 9
8-bit bytes in the printout?
I no longer remember! All I can remember is that one character in
4 or 5 had lost a bit. It was done by some of my colleagues, and
they showed it to me as someone who might be amused.


Regards,
Nick Maclaren.
unknown
2013-02-28 19:47:31 UTC
Permalink
Post by n***@cam.ac.uk
Post by unknown
Post by n***@cam.ac.uk
The bizarrest ones I ever saw were ones that packed five 7-bit
characters. I once saw some program code that had been converted
from a (w.l.o.g.) 8-bit machine as binary and then written out
as characters. Problem: get it working on the original system,
starting from the printout. Not that hard, as it was Fortran with
very few multi-digit numbers.
Did you have the original source code stored as 5 letters per 36 bit
word and 10 letters in 72 bits, then those 72 bits had been split into 9
8-bit bytes in the printout?
I no longer remember! All I can remember is that one character in
4 or 5 had lost a bit. It was done by some of my colleagues, and
they showed it to me as someone who might be amused.
OK, I see what happened!

You had an effectively binary file with 36-bit words; it had been
printed as 5 7-bit chars/octals per word, dropping either the first or
last bit.

This meant that you had to guess what that lost bit should be, and
since you could recover the code, it was probably located as the top
bit of the corresponding byte/char.

(If it had been the bottom bit then you would have had no way to figure
out if a digit was 0 or 1, 2 or 3, etc. - in ASCII '0' is 0110000 and
'1' is 0110001, differing only in the bottom bit - and digits are very
common in Fortran source code.)

Terje
--
- <Terje.Mathisen at tmsw.no>
"almost all programming can be viewed as an exercise in caching"
Stephen Fuld
2013-02-28 16:44:28 UTC
Permalink
Post by unknown
Post by n***@cam.ac.uk
6 bit characters were fairly widespread - the ICL 1900 had them, too,
in several variants.
Unisys had both 6 and 8 (or 9?) bit chars afair.
Yes. Well, actually only the Univac part of Unisys. It was (and its
descendants still are) a 36 bit word oriented machine. That is, each
memory address referred to a 36 bit word. To access "characters", the
hardware provided a field within the instructions that allowed access to
"partial words". The original systems supported 6 bit characters, as
was common in that era. One accessed them by specifying which "sixth"
of the word one wanted to deal with. So, for example, you could load
"s1", the first sixth, which loaded the high order six bits of a 36 bit
word in memory into a register. There was also the ability to deal with
other partial words, thirds (12 bits) and halves (18 bits), all on the
natural boundaries.

Note that through some innovative engineering, there was no need for a
read before write to store partial words (at least until the core was
replaced with DRAM and ECC was added). But by then the software expected
the hardware to "just do it", so you could do a partial word store and
the hardware would do the RBW behind the scenes for you.

Later, when it became important to support ASCII, the designers faced
the problem of how to provide support in a backward compatible way. They
came pretty close by adding support for dealing with "quarter words".
They stored an eight bit character in a nine bit field, with the high
order bit not used. They added additional hardware partial word
designators to allow dealing with the quarter words (Q1-Q4). But lots
of existing software still dealt with things in the original 6 bit code,
so the hardware supported both sixths and quarters of the same 36 bit
word. And so it exists still today.
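
To make that concrete, here is a rough C model of the partial-word
designators (a sketch, not Univac notation; it assumes the 36-bit word
sits right-justified in a 64-bit integer, and follows the S1/Q1 naming
above):

#include <stdint.h>

typedef uint64_t word36;            /* a 36-bit word, right-justified */

/* Sixths: S1 is the high-order 6 bits, S6 the low-order 6 bits. */
unsigned sixth(word36 w, int n)     /* n = 1..6 */
{
    return (unsigned)((w >> (6 * (6 - n))) & 077);
}

/* Quarters: Q1..Q4 are 9-bit fields; an 8-bit ASCII character is
   stored in the low 8 bits and the top bit goes unused. */
unsigned quarter(word36 w, int n)   /* n = 1..4 */
{
    return (unsigned)((w >> (9 * (4 - n))) & 0777);
}

Halves (18 bits) and thirds (12 bits) fall out the same way, and the
same word can be picked apart either as six 6-bit characters or as four
9-bit ones, which is exactly the compatibility trick described above.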

I have often told Unisys (and its predecessor companies) management that
their biggest failure was their inability to convince the world that 36
was an integral power of 2. :-(
--
- Stephen Fuld
(e-mail address disguised to prevent spam)
h***@bbs.cpcn.com
2013-02-28 19:45:42 UTC
Permalink
Post by Stephen Fuld
Later, when it became important to support ASCII, the designers faced
the problem of how to provide support in a backward compatible way.  They
came pretty close by adding support for dealing with "quarter words".
They stored an eight bit character in a nine bit field, with the high
order bit not used.  They added additional hardware partial word
designators to allow dealing with the quarter words (Q1-Q4).  But lots
of existing software still dealt with things in the original 6 bit code,
so the hardware supported both sixths and quarters of the same 36 bit
word.  And so it exists still today.
In April 1964 Western Union analyzed the then-new ASCII code. They
saw it as seven bits, not eight (see link below, pg 55). From a
communications standpoint they found it objectionable because (1) seven bits
would be harder for staff to remember than five bits and (2) the codes
have many 'one-bits' which would result in increased wear on
reperforators, relays, and transmitters.

http://massis.lcs.mit.edu/telecom-archives/archives/technical/western-union-tech-review/18-2/p050.htm
Shmuel (Seymour J.) Metz
2013-02-28 17:08:32 UTC
Permalink
Post by unknown
Unisys had both 6 and 8 (or 9?) bit chars afair.
Burroughs used 6 on the B5000 series and 4, 6 or 8 on the B6500 line.

UNIVAC's 1108 (36 bit) line used 6 and 9; their 32-bit machines used
8. The 30-bit line and the older 36-bit line used 6. I believe the
UNIVAC III also used 6.

AFAIK only the U1108 and B6500 lines are still supported; I don't know
whether Unisys initially carried other lines forward.
--
Shmuel (Seymour J.) Metz, SysProg and JOAT <http://patriot.net/~shmuel>

Unsolicited bulk E-mail subject to legal action. I reserve the
right to publicly post or ridicule any abusive E-mail. Reply to
domain Patriot dot net user shmuel+news to contact me. Do not
reply to ***@library.lspace.org
Peter Flass
2013-02-28 20:18:09 UTC
Permalink
Post by Shmuel (Seymour J.) Metz
Post by unknown
Unisys had both 6 and 8 (or 9?) bit chars afair.
Burroughs used 6 on the B5000 series and 4, 6 or 8 on the B6500 line.
UNIVAC's 1108 (36 bit) line used 6 and 9;
6, 7, and 9 IIRC. Talk about weird architectures - they supported three
different character sets "Fielddata", which was originally used for
military applications, and, I believe, both 7 and 8 bit ASCII. (Univac
1108).
--
Pete
Stephen Fuld
2013-02-28 21:35:35 UTC
Permalink
Post by Peter Flass
Post by Shmuel (Seymour J.) Metz
Post by unknown
Unisys had both 6 and 8 (or 9?) bit chars afair.
Burroughs used 6 on the B5000 series and 4, 6 or 8 on the B6500 line.
UNIVAC's 1108 (36 bit) line used 6 and 9;
6, 7, and 9 IIRC. Talk about weird architectures - they supported three
different character sets "Fielddata", which was originally used for
military applications, and, I believe, both 7 and 8 bit ASCII. (Univac
1108).
AFAICR the 1108 never supported 7 bit ASCII. But yes, the six bit code
was Fieldata, not BCD.
--
- Stephen Fuld
(e-mail address disguised to prevent spam)
unknown
2013-03-01 06:59:28 UTC
Permalink
Post by Stephen Fuld
Post by Peter Flass
Post by Shmuel (Seymour J.) Metz
Post by unknown
Unisys had both 6 and 8 (or 9?) bit chars afair.
Burroughs used 6 on the B5000 series and 4, 6 or 8 on the B6500 line.
UNIVAC's 1108 (36 bit) line used 6 and 9;
6, 7, and 9 IIRC. Talk about weird architectures - they supported three
different character sets "Fielddata", which was originally used for
military applications, and, I believe, both 7 and 8 bit ASCII. (Univac
1108).
AFAICR the 1108 never supported 7 bit ASCII. But yes, the six bit code
was Fieldata, not BCD.
The only reason I remembered that was because I learned to program (in
Fortran 2) on an 110x using Fielddata.

The bitness of the chars really didn't matter to me though, since the
only way I had to handle characters was as integers, and print them with
fixed-size '27Hxxxxx' format specifiers...

Not 'The Good Old Days'.

Terje
--
- <Terje.Mathisen at tmsw.no>
"almost all programming can be viewed as an exercise in caching"
Ivan Godard
2013-03-01 07:13:46 UTC
Permalink
Post by unknown
The only reason I remembered that was because I learned to program (in
Fortran 2) on an 110x using Fielddata.
The bitness of the chars really didn't matter to me though, since the
only way I had to handle characters was as integers, and print them with
fixed-size '27Hxxxxx' format specifiers...
Not 'The Good Old Days'.
Au contraire, mon frère. I did the first Mary compiler on the 1108 at NTH
Regnecentralen, in Algol 60 and fielddata targeting the Kongsberg SM-4
and Nord-1. One of the happier times of my life :-)

Ivan
n***@cam.ac.uk
2013-03-01 08:21:13 UTC
Permalink
Post by Ivan Godard
Post by unknown
The only reason I remembered that was because I learned to program (in
Fortran 2) on an 110x using Fielddata.
The bitness of the chars really didn't matter to me though, since the
only way I had to handle characters was as integers, and print them with
fixed-size '27Hxxxxx' format specifiers...
Not 'The Good Old Days'.
Au contraire, mon frère. I did the first Mary compiler on the 1108 at NTH
Regnecentralen, in Algol 60 and fielddata targeting the Kongsberg SM-4
and Nord-1. One of the happier times of my life :-)
Yes. Fortran was appalling for character handling before Fortran 77,
but there were a LOT of better languages.


Regards,
Nick Maclaren.
unknown
2013-03-01 10:46:19 UTC
Permalink
Post by Ivan Godard
Post by unknown
The only reason I remembered that was because I learned to program (in
Fortran 2) on an 110x using Fielddata.
The bitness of the chars really didn't matter to me though, since the
only way I had to handle characters was as integers, and print them with
fixed-size '27Hxxxxx' format specifiers...
Not 'The Good Old Days'.
Au contraire, mon frère. I did the first Mary compiler on the 1108 at NTH
Regnecentralen, in Algol 60 and fielddata targeting the Kongsberg SM-4
and Nord-1. One of the happier times of my life :-)
So we used the very same 1108 then!

NTH changed its name to NTNU when they merged the technical institute
with the rest of the university-level colleges in Trondheim; this
happened shortly after my time.

(BTW, I have been exposed to Mary just once, in the form of an OS exam I
took after graduating, while working at SINTEF.)

Terje
--
- <Terje.Mathisen at tmsw.no>
"almost all programming can be viewed as an exercise in caching"
Shmuel (Seymour J.) Metz
2013-02-28 23:23:43 UTC
Permalink
Post by Peter Flass
6, 7, and 9 IIRC.
No. The j field[1] can specify a word, one of two 18-bit bytes, one of
three 12-bit bytes, one of four 9-bit bytes or one of six 6-bit bytes.
Post by Peter Flass
they supported three different character sets
The character set is not part of the architecture. I consider the
UNIVAC 1108 to be rather conventional compared to, e.g., Bendix G-20,
RCA 601.

[1] Possibly modified by the quarter word designator.
--
Shmuel (Seymour J.) Metz, SysProg and JOAT <http://patriot.net/~shmuel>

Unsolicited bulk E-mail subject to legal action. I reserve the
right to publicly post or ridicule any abusive E-mail. Reply to
domain Patriot dot net user shmuel+news to contact me. Do not
reply to ***@library.lspace.org
Scott Lurndal
2013-02-27 21:20:56 UTC
Permalink
Post by Shmuel (Seymour J.) Metz
Post by unknown
Unisys had both 6 and 8 (or 9?) bit chars afair.
Burroughs used 6 on the B5000 series and 4, 6 or 8 on the B6500 line.
and 8 on the B3500 (medium systems) line.  FWIW, B5000 is considered
an ancestor of the B6500 (with the B5500 derived from the B5000 and
the B6500 a Tredyffrin-designed larger B5500) (I suppose like the PDP-6
and PDP-10?).

Thereafter, the large systems designs alternated between southern
California and Tredyffrin, PA.
Post by Shmuel (Seymour J.) Metz
UNIVAC's 1108 (36 bit) line used 6 and 9; their 32-bit machines used
8. The 30-bit line and the older 36-bit line used 6. I believe the the
UNIVAC III also used 6.
AFAIK only the U1108 and B6500 lines are still supported; I don't know
whether Unisys initially carried other lines forward.
The merger was in 86. The V-series (nee Medium Systems) line was discontinued
around 1991 (and support EOL'd 12/31/1999). The small systems (B1900) line
had been discontinued prior to the merger. CP9500 lived on for a while, but
probably not much past 1990.

The System/80 line on the Sperry side was discontinued right after the
merger, if I recall correctly.

Two lines still exist, one based on the Sperry 2200 (Clearpath Dorado) and the other
on the Burroughs A-Series (Clearpath Libra) (A-series were follow-ons to the B[567]900).

scott
Ivan Godard
2013-02-28 22:34:26 UTC
Permalink
Post by Scott Lurndal
Post by Shmuel (Seymour J.) Metz
Post by unknown
Unisys had both 6 and 8 (or 9?) bit chars afair.
Burroughs used 6 on the B5000 series and 4, 6 or 8 on the B6500 line.
and 8 on the B3500 (medium systems) line.  FWIW, B5000 is considered
an ancestor of the B6500 (with the B5500 derived from the B5000 and
the B6500 a Tredyffrin-designed larger B5500) (I suppose like the PDP-6
and PDP-10?).
Thereafter, the large systems designs alternated between southern
California and Tredyffrin, PA.
Nope, you're thinking of the B8500 which was a Pennsylvania disaster.
The B6500 was done in Pasadena. I was there :-)
Post by Scott Lurndal
Post by Shmuel (Seymour J.) Metz
UNIVAC's 1108 (36 bit) line used 6 and 9; their 32-bit machines used
8. The 30-bit line and the older 36-bit line used 6. I believe the the
UNIVAC III also used 6.
AFAIK only the U1108 and B6500 lines are still supported; I don't know
whether Unisys initially carried other lines forward.
The merger was in 86. The V-series (nee Medium Systems) line was discontinued
around 1991 (and support EOL'd 12/31/1999). The small systems (B1900) line
had been discontinued prior to the merger. CP9500 lived on for a while, but
probably not much past 1990.
The System/80 line on the Sperry side was discontinued right after the
merger, if I recall correctly.
Two lines still exist, one based on the Sperry 2200 (Clearpath Dorado) and the other
on the Burroughs A-Series (Clearpath Libra) (A-series were follow-ons to the B[567]900).
scott
Shmuel (Seymour J.) Metz
2013-02-28 23:39:28 UTC
Permalink
Post by Ivan Godard
Nope, you're thinking of the B8500 which was a Pennsylvania disaster.
When record companies were sending Paoli to radio stations to play
their songs <g, d & r>.
--
Shmuel (Seymour J.) Metz, SysProg and JOAT <http://patriot.net/~shmuel>

Unsolicited bulk E-mail subject to legal action. I reserve the
right to publicly post or ridicule any abusive E-mail. Reply to
domain Patriot dot net user shmuel+news to contact me. Do not
reply to ***@library.lspace.org
h***@bbs.cpcn.com
2013-03-01 02:43:37 UTC
Permalink
Post by Scott Lurndal
Thereafter, the large systems designs alternated between southern cal and
tredyffrin Pa.
I was about to ask if that arrangement wasn't very inefficient. But
then I recalled that IBM assigned one model of System/360 to an
English town (Hursley?). Somewhere I read that IBM set up an overseas
private line to interconnect its locations. Back then that was
enormously expensive.

BTW, Tredyffrin PA is fairly close to Blue Bell, PA, where Univac's HQ
was, about a 20 minute drive. These are all suburbs of
Philadelphia. IIRC, both pre-merger Univac and Burroughs had several
office locations in that area, including Paoli, PA, and King of
Prussia, PA.


IIRC, RCA divided work between Cherry Hill, NJ, and Boston, sometimes
swapping assignments, which did not help its situation.
Shmuel (Seymour J.) Metz
2013-02-28 23:31:42 UTC
Permalink
Post by Scott Lurndal
and 8 on the B3500 (medium systems) line. FWIW, B5000 is
considered an ancestor of the B6500
FSVO ancestor.
Post by Scott Lurndal
(with the B5500 derived from the B5000 and
the B6500 a tredy-designed larger B5500)
Not even close. The B5000, B5500 and B5700 have a common architecture;
the B6500 is a totally new design.

What was the fate of the UNIVAC 490 line?
--
Shmuel (Seymour J.) Metz, SysProg and JOAT <http://patriot.net/~shmuel>

Unsolicited bulk E-mail subject to legal action. I reserve the
right to publicly post or ridicule any abusive E-mail. Reply to
domain Patriot dot net user shmuel+news to contact me. Do not
reply to ***@library.lspace.org
Stephen Fuld
2013-03-01 07:34:36 UTC
Permalink
Post by Shmuel (Seymour J.) Metz
Post by Scott Lurndal
and 8 on the B3500 (medium systems) line. FWIW, B5000 is
considered an ancestor of the B6500
FSVO ancestor.
Post by Scott Lurndal
(with the B5500 derived from the B5000 and
the B6500 a tredy-designed larger B5500)
Not even close. The B5000, B5500 and B5700 have a common architecture;
the B6500 is a totally new design.
What was the fate of the UNIVAC 490 line?
Univac had pretty much phased them out by the mid 1970s (though a few
remained). The 1100/80 had a mode to emulate the 494, though I never
heard of anyone using it.

BTW, Univac also had the 418 series, which had 18 bit words. The only
use I know of for them was by Western Union as front ends for their
telegram systems. They were gone even before the 490s.
--
- Stephen Fuld
(e-mail address disguised to prevent spam)
Morten Reistad
2013-02-28 15:59:21 UTC
Permalink
Post by unknown
Post by n***@cam.ac.uk
6 bit characters were fairly widespread - the ICL 1900 had them, too,
in several variants.
Unisys had both 6 and 8 (or 9?) bit chars afair.
Post by n***@cam.ac.uk
The bizarrest ones I ever saw were ones that packed five 7-bit
characters. I once saw some program code that had been converted
from a (w.l.o.g.) 8-bit machine as binary and then written out
as characters. Problem: get it working on the original system,
starting from the printout. Not that hard, as it was Fortran with
very few multi-digit numbers.
Did you have the original source code stored as 5 letters per 36 bit
word and 10 letters in 72 bits, then those 72 bits had been split into 9
8-bit bytes in the printout?
The text storage on pdp10s uses 5 ascii (7-bit) characters in one 36-bit
word. Thank $DEITY for the ildb/idpb instructions, so a

for (cp = buffer, dp = dest; *cp; *dp++ = *cp++);

loop worked efficiently.

The byte pointer can have any byte size from 1 to 36. The core of
a loop converting from any to any byte size via a lookup table
takes all of four instructions. This makes it possible to handle
all of these various formats (except radix50, which requires a
mul/div loop) handsomely.
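
In C terms the byte-pointer mechanism comes out roughly as follows (a
toy emulation, not DEC code; the real ILDB/IDPB advance the pointer
before the access, so the difference here is only in how you
initialise it):

#include <stdint.h>

struct bp {                /* toy byte pointer                     */
    uint64_t *word;        /* current 36-bit word, right-justified */
    int pos;               /* bit offset of the next byte, 0 = MSB */
    int size;              /* byte size in bits, 1..36             */
};

static uint64_t bmask(int size) { return ((uint64_t)1 << size) - 1; }

unsigned ildb(struct bp *p)             /* load a byte, then advance */
{
    if (p->pos + p->size > 36) {        /* byte won't fit: skip to   */
        p->word++;                      /* the MSB of the next word  */
        p->pos = 0;
    }
    unsigned b = (unsigned)((*p->word >> (36 - p->pos - p->size))
                            & bmask(p->size));
    p->pos += p->size;
    return b;
}

void idpb(unsigned b, struct bp *p)     /* deposit a byte, then advance */
{
    if (p->pos + p->size > 36) {
        p->word++;
        p->pos = 0;
    }
    int sh = 36 - p->pos - p->size;
    *p->word = (*p->word & ~(bmask(p->size) << sh))
             | (((uint64_t)b & bmask(p->size)) << sh);
    p->pos += p->size;
}

The any-size-to-any-size conversion loop is then just, for sixbit to
ascii say,

while (n--)
    idpb(tt[ildb(&src)], &dst);   /* tt: 64-entry translate table */

with src.size = 6 and dst.size = 7.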

So, yes, any "normal" text file on a pdp10 has 5 bytes to the word,
plus one discarded bit. This "discarded" bit is used to keep tag
words, like line numbering, in source code; and words with this
bit set are ignored or used by editors and compilers.

The printout also kept 7-bit bytes. Remember, the architecture was
officially cancelled May 17th, 1983. That is 30 years ago this
year.

-- mrr
showing my age.
Dan Espen
2013-02-28 20:56:13 UTC
Permalink
What makes an architecture bizarre? Meetings.
--
Dan Espen
Shmuel (Seymour J.) Metz
2013-02-28 15:21:44 UTC
Permalink
Post by n***@cam.ac.uk
The bizarrest ones I ever saw were ones that packed five 7-bit
characters.
Why is it bizarre for a 36-bit machine to store 5 ASCII characters in
a word? The alternative of storing 4 9-bit bytes in the word wastes
space, since ASCII is only 7 bits.
--
Shmuel (Seymour J.) Metz, SysProg and JOAT <http://patriot.net/~shmuel>

Unsolicited bulk E-mail subject to legal action. I reserve the
right to publicly post or ridicule any abusive E-mail. Reply to
domain Patriot dot net user shmuel+news to contact me. Do not
reply to ***@library.lspace.org
Stephen Fuld
2013-02-28 16:51:35 UTC
Permalink
Post by Shmuel (Seymour J.) Metz
Post by n***@cam.ac.uk
The bizarrest ones I ever saw were ones that packed five 7-bit
characters.
Why is it bizarre for a 36-bit machine to store 5 ASCII characters in
a word? The alternative of storing 4 9-bit bytes in the word wastes
space, since ASCII is only 7 bits.
I wouldn't go as far as bizarre, but it is unusual in that it is
irregular. How did one handle arbitrary length "strings" of ASCII
characters? Did you have to do special coding to skip a bit every five
characters? Was there specific hardware support for this format to make
things easier?
--
- Stephen Fuld
(e-mail address disguised to prevent spam)
Raymond Wiker
2013-02-28 17:27:53 UTC
Permalink
Post by Stephen Fuld
Post by Shmuel (Seymour J.) Metz
Post by n***@cam.ac.uk
The bizarrest ones I ever saw were ones that packed five 7-bit
characters.
Why is it bizarre for a 36-bit machine to store 5 ASCII characters in
a word? The alternative of storing 4 9-bit bytes in the word wastes
space, since ASCII is only 7 bits.
I wouldn't go as far as bizarre, but it is unusual in that it is
irregular. How did one handle arbitrary length "strings" of ASCII
characters. Did you have to do special coding to skip a bit every
five characters? Was there specific hardware support for this format
to make things easier?
The PDP-10 had the bit-field operations ldb and dpb (which live on in
Common Lisp), and (probably) also a set of string processing functions
implemented in hardware.

Note that packing 5 7-bit characters to a 36-bit word is not *that*
much more complex than packing 4 8-bit characters in a 32-bit word -
they both require a little care to get good performance from string
processing.
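
To see how small the difference is, here is a sketch of the two
extractions in C (assuming the PDP-10 layout of five left-justified
7-bit characters with bit 35, the LSB, spare):

#include <stdint.h>

/* i-th character (0 = leftmost) of five 7-bit chars in a 36-bit word */
unsigned char7(uint64_t w, int i) { return (w >> (29 - 7 * i)) & 0x7F; }

/* i-th character of four 8-bit chars in a 32-bit word */
unsigned char8(uint32_t w, int i) { return (w >> (24 - 8 * i)) & 0xFF; }

The only irregularities are the stride of 7 rather than 8 and the
leftover low bit; the shape of a string loop is the same either way.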
n***@cam.ac.uk
2013-02-28 17:47:28 UTC
Permalink
Post by Raymond Wiker
Post by Stephen Fuld
Post by Shmuel (Seymour J.) Metz
Post by n***@cam.ac.uk
The bizarrest ones I ever saw were ones that packed five 7-bit
characters.
Why is it bizarre for a 36-bit machine to store 5 ASCII characters in
a word? The alternative of storing 4 9-bit bytes in the word wastes
space, since ASCII is only 7 bits.
I wouldn't go as far as bizarre, but it is unusual in that it is
irregular. How did one handle arbitrary length "strings" of ASCII
characters. Did you have to do special coding to skip a bit every
five characters? Was there specific hardware support for this format
to make things easier?
The PDP-10 had the bit-field operations ldb and dpb (which live on in
Common Lisp), and (probably) also a set of string processing functions
implemented in hardware.
Note that packing 5 7-bit characters to a 36-bit word is not *that*
much more complex than packing 4 8-bit characters in a 32-bit word -
they both require a little care to get good performance from string
processing.
But there are other wrinkles, too. Remember the date. Fortran
was the dominant scientific programming language, and had no
character type before 1977. Handling characters in integers is
tricky enough, without adding that.


Regards,
Nick Maclaren.
Quadibloc
2013-02-28 19:00:29 UTC
Permalink
But there are other wrinkles, too.  Remember the date.  Fortran
was the dominant scientific programming language, and had no
character type before 1977.  Handling characters in integers is
tricky enough, without adding that.
Fortran programs would be stuck with six-bit characters until somebody
gave them new assembly-language subroutines to use the new longer
characters instead.

John Savard
Quadibloc
2013-02-28 18:59:19 UTC
Permalink
Post by Stephen Fuld
I wouldn't go as far as bizarre, but it is unusual in that it is
irregular.  How did one handle arbitrary length "strings" of ASCII
characters.  Did you have to do special coding to skip a bit every five
characters?  Was there specific hardware support for this format to make
things easier?
The fact that five ASCII characters left a bit unused in a 36-bit word
would not, of itself, have required additional special coding. In
general, characters tended not to get special support in computers of
that vintage, and so one had to advance to the next word after every
few characters, and move through the word character by character.

It's not as if bit-field instructions were common in computers of that
vintage. Those machines would have been scientific, not commercial, computers.

John Savard
Stephen Fuld
2013-02-28 20:22:31 UTC
Permalink
Post by Quadibloc
Post by Stephen Fuld
I wouldn't go as far as bizarre, but it is unusual in that it is
irregular. How did one handle arbitrary length "strings" of ASCII
characters. Did you have to do special coding to skip a bit every five
characters? Was there specific hardware support for this format to make
things easier?
The fact that five ASCII characters left a bit unused in a 36-bit word
would not, of itself, have required additional special coding. In
general, characters tended not to get special support in computers of
that vintage, and so one has to advance to the next word after every
few characters, and move through the word character by character.
Well, the 1108 was released in about 1964, well before the PDP 10, and it
had some hardware support for characters (see my earlier post). Even
though it had that support (and the encoding was more "regular"), dealing
with arbitrary length strings was messy. But if you had arbitrary bit
extract/insert instructions, as Raymond Wiker has posted, then it
wouldn't be so bad. But again, the PDP 10 was long after the 1108.
Post by Quadibloc
It's not as if bit-field instructions were common in computers of that
vintage. They would have been scientific, not commercial, computers.
BTW, the 1108s I worked on in the 1970s were mostly programmed in COBOL,
not Fortran, but that's a different story.
--
- Stephen Fuld
(e-mail address disguised to prevent spam)
h***@bbs.cpcn.com
2013-02-28 19:45:48 UTC
Permalink
Post by Stephen Fuld
I wouldn't go as far as bizarre, but it is unusual in that it is
irregular.  How did one handle arbitrary length "strings" of ASCII
characters.  Did you have to do special coding to skip a bit every five
characters?  Was there specific hardware support for this format to make
things easier?
On the IBM 14xx line, which was intended for business processing and
had character addressing, there was an additional bit on every
character called a word mark. This would be set to define a boundary
of a string of characters.

From the 1401 General Information manual (see link below, pg 9):

Variable Word Length

Certain stored-program machines, of which the 1401
is an example, have a desirable characteristic called
"variable word length." The ability to have grouped
together any number of storage positions to accommodate
fields of any size, gives rise to this term. The
fact that a machine has variable word length allows
efficient use of the storage area.

It is recalled that on both the 604 and 602 [calculators] each
storage unit contains a specific number of positions;
each is fixed in length. For this reason, high-order
positions are frequently wasted because, in size, the
field is smaller than the unit. [In those days, core memory was still
very expensive, and wasted space was a significant issue, especially
in a lower priced machine.]

In a variable word length machine this does not
happen; there are no fixed groupings of storage positions.
Instead, the size of the grouping varies with
the length of the data or instruction field to be accommodated.

The word mark makes this possible; it
can be set to make any storage position the high-order
position. It can also be erased from storage positions
when field limits change.

http://bitsavers.informatik.uni-stuttgart.de/pdf/ibm/140x/F20-208_1401_GenInfo1959.pdf
page 9
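
A rough C model of the word-mark idea (the struct is illustrative, my
own representation rather than IBM's):

/* One 1401-style storage position: a character plus a word-mark bit. */
struct pos {
    unsigned char ch;   /* the stored character           */
    unsigned char wm;   /* 1 if the word mark is set here */
};

/* A field is addressed at its low-order (rightmost) position and
   extends leftward until a word mark is found, so no length ever
   has to appear in the instruction. */
int field_length(const struct pos *store, int addr)
{
    int len = 1;
    while (!store[addr].wm) {   /* word mark flags the high-order end */
        addr--;
        len++;
    }
    return len;
}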
Patrick Scheible
2013-02-28 21:19:08 UTC
Permalink
Post by Stephen Fuld
Post by Shmuel (Seymour J.) Metz
Post by n***@cam.ac.uk
The bizarrest ones I ever saw were ones that packed five 7-bit
characters.
Why is it bizarre for a 36-bit machine to store 5 ASCII characters in
a word? The alternative of storing 4 9-bit bytes in the word wastes
space, since ASCII is only 7 bits.
I wouldn't go as far as bizarre, but it is unusual in that it is
irregular. How did one handle arbitrary length "strings" of ASCII
characters. Did you have to do special coding to skip a bit every
five characters? Was there specific hardware support for this format
to make things easier?
On the PDP-10 there are instructions to pack and unpack any length of
byte from words. I'm sure other word-oriented machines had them as
well.

-- Patrick
John Levine
2013-02-28 23:42:53 UTC
Permalink
Post by Patrick Scheible
On the PDP-10 there are instructions to pack and unpack any length of
byte from words.
Yes.
Post by Patrick Scheible
I'm sure other word-oriented machines had them as well.
No. The GE 635/645 (the Multics machine), for example, could pack and
unpack 6 or 9 bit bytes, but for any other size you had to do your own
shifting and masking.
--
Regards,
John Levine, ***@iecc.com, Primary Perpetrator of "The Internet for Dummies",
Please consider the environment before reading this e-mail. http://jl.ly
Shmuel (Seymour J.) Metz
2013-02-28 23:36:54 UTC
Permalink
Post by Patrick Scheible
On the PDP-10 there are instructions to pack and unpack any length of
byte from words. I'm sure other word-oriented machines had them as
well.
The VFL unit of the IBM 7030 could handle byte strings with bytes that
crossed word boundaries; the CDC 3600 and 3800 could access single
bytes. The B1700 also had byte handling features, but I don't know the
details. I believe that the EIS in GE HoneyBull also had limited byte
handling capability.
--
Shmuel (Seymour J.) Metz, SysProg and JOAT <http://patriot.net/~shmuel>

Unsolicited bulk E-mail subject to legal action. I reserve the
right to publicly post or ridicule any abusive E-mail. Reply to
domain Patriot dot net user shmuel+news to contact me. Do not
reply to ***@library.lspace.org
Morten Reistad
2013-02-28 21:07:45 UTC
Permalink
Post by Stephen Fuld
Post by Shmuel (Seymour J.) Metz
Post by n***@cam.ac.uk
The bizarrest ones I ever saw were ones that packed five 7-bit
characters.
Why is it bizarre for a 36-bit machine to store 5 ASCII characters in
a word? The alternative of storing 4 9-bit bytes in the word wastes
space, since ASCII is only 7 bits.
I wouldn't go as far as bizarre, but it is unusual in that it is
irregular. How did one handle arbitrary length "strings" of ASCII
characters. Did you have to do special coding to skip a bit every five
characters? Was there specific hardware support for this format to make
things easier?
The PDP10 has a BYTE POINTER construct, and 5 instructions that
use these to reference memory.

The byte pointer has fields for byte size, byte offset (as bit
within a word) and the word address.

LDB ac,bp    LoaDs the Byte pointed to by bp into register ac
DPB ac,bp    DePosits the Byte in ac to the word pointed to by bp
ILDB ac,bp   as LDB, but increments the byte pointer first
IDPB ac,bp   as DPB, but increments the byte pointer first
ADJBP        for arithmetic on byte pointers.

The byte pointer can have byte sizes from 1 to 36 and offsets wherever
the remaining byte fits in the word. It does not handle cross-word
bytes; it skips to the start (MSB) of the next word instead.

So, if bp1 is a byte pointer into a sixbit array, bp2 one into a 7-bit
array, tt is a translate table, ac1 and ac2 are registers, and cnt is a
register initialised with the string length, then (from VERY rusty memory)

loop:  ildb ac1,bp1       ; fetch the next sixbit byte
       move ac2,tt(ac1)   ; translate it through the table
       idpb ac2,bp2       ; deposit it as the next 7-bit byte
       sosle cnt          ; decrement the count; skip out when <= 0
       jrst loop

would copy the string while expanding the bytes using the tt table.

-- mrr
Shmuel (Seymour J.) Metz
2013-02-28 18:54:55 UTC
Permalink
Post by Stephen Fuld
I wouldn't go as far as bizarre, but it is unusual in that it is
irregular.
The PDP-6 and PDP-10 didn't have support for bytes that straddled word
boundaries; dropping one bit per word was the simplest way to handle
it.
Post by Stephen Fuld
Was there specific hardware support for this format to make things
easier?
There were generic byte instructions: Adjust Byte Pointer, Deposit
Byte, Increment Byte Pointer, Increment Pointer and Deposit Byte,
Increment Pointer and Load Byte, Load Byte.
--
Shmuel (Seymour J.) Metz, SysProg and JOAT <http://patriot.net/~shmuel>

Unsolicited bulk E-mail subject to legal action. I reserve the
right to publicly post or ridicule any abusive E-mail. Reply to
domain Patriot dot net user shmuel+news to contact me. Do not
reply to ***@library.lspace.org
Patrick Scheible
2013-02-28 20:59:00 UTC
Permalink
Post by n***@cam.ac.uk
Post by Quadibloc
Anyways, in a search for more information, I came to a Wikipedia page
with a link to a PDF of a lecture from a series about "bizarre
architectures", which covered the Control Data 3600 and 6600. They
apparently were bizarre because they had 6-bit characters instead of 8-
bit characters.
I'll have to say that I found that hyperbolic.
6 bit characters were fairly widespread - the ICL 1900 had them, too,
in several variants.
The bizarrest ones I ever saw were ones that packed five 7-bit
characters. I once saw some program code that had been converted
from a (w.l.o.g.) 8-bit machine as binary and then written out
as characters. Problem: get it working on the original system,
starting from the printout. Not that hard, as it was Fortran with
very few multi-digit numbers.
7-bit characters aren't bizarre at all. ASCII is a 7-bit code.
Stands to reason that back when Men were Men and memory spaces were
precious, they'd be packed. Same comment about sixbit, it's enough for
uppercase and digits and symbols, and for many purposes that's all you
need.

-- Patrick
h***@bbs.cpcn.com
2013-03-01 02:37:29 UTC
Permalink
7-bit characters aren't bizarre at all.  ASCII is a 7-bit code.
Stands to reason that back when Men were Men and memory spaces were
precious, they'd be packed.  Same comment about sixbit, it's enough for
uppercase and digits and symbols, and for many purposes that's all you
need.
Some applications had computer generated letters or messages. Often
they used a generic letterhead or plain white paper, but were
generated by the line printer in all caps. A bunch of text
information presented that way is very hard to read, especially single
spaced.

What was sad was that once it became easy to enter and print lower
case letters (via better terminals and xerographic printers in the mid
1980s), many organizations didn't bother upgrading their programs, and
they continued to print in all caps.
Dan Espen
2013-03-01 03:21:47 UTC
Permalink
Post by h***@bbs.cpcn.com
7-bit characters aren't bizarre at all.  ASCII is a 7-bit code.
Stands to reason that back when Men were Men and memory spaces were
precious, they'd be packed.  Same comment about sixbit, it's enough for
uppercase and digits and symbols, and for many purposes that's all you
need.
Some applications had computer generated letters or messages. Often
they used a generic letterhead or plain white paper, but were
generated by the line printer in all caps. A bunch of text
information presented that way is very hard to read, especially single
spaced.
What was sad was that once it became easy to enter and print lower
case letters (via better terminals and xerographic printers in the mid
1980s), many organizations didn't bother upgrading their programs, and
they continued to print in all caps.
Sometime in the mid 90s I was doing tech support for a large z/OS project
written in C and mostly ported from UNIX. Almost all the developers
were refugees from UNIX.

There was lowercase almost everywhere.

The first dump I got in from the field had all the function names in the
dump translated to periods (marked as unprintable characters). I had
to get the customer to change their dump options so that lower case
would get printed. Up until then there must not have been enough
lower case in dumps for anyone to care.
--
Dan Espen
n***@cam.ac.uk
2013-03-01 08:07:27 UTC
Permalink
Post by Dan Espen
Post by h***@bbs.cpcn.com
What was sad was that once it became easy to enter and print lower
case letters (via better terminals and xerographic printers in the mid
1980s), many organizations didn't bother upgrading their programs, and
they continued to print in all caps.
Sometime in the mid 90s I was doing tech support for a large z/OS project
written in C and mostly ported from UNIX. Almost all the developers
were refugees from UNIX.
There was lowercase almost everywhere.
Unixoids have traditionally had trouble locating the shift key :-)
Post by Dan Espen
The first dump I got in from field had all the function names in the
dump translated to periods (marked as unprintable characters). I had
to get the customer to change their dump options so that lower case
would get printed. Up until then there must not have been enough
lower case in dumps for anyone to care.
Back in 1972, when we got our first System/370, one of the first
things that we specified was printer chains that supported both
cases. But Cambridge, in the 1960s, was one of the 2-3 leading
universities in the use of computers in the 'soft' sciences.
At one time in the 1970s, ancient Tibetan was the second most
common language printed on our line printers (in a transliteration
to Latin letters, of course).

So not everywhere was like that, but many IBM shops were.


Regards,
Nick Maclaren.
Peter Flass
2013-03-01 12:51:50 UTC
Permalink
Post by h***@bbs.cpcn.com
Post by Patrick Scheible
7-bit characters aren't bizarre at all. ASCII is a 7-bit code.
Stands to reason that back when Men were Men and memory spaces were
precious, they'd be packed. Same comment about sixbit, it's enough for
uppercase and digits and symbols, and for many purposes that's all you
need.
Some applications had computer generated letters or messages. Often
they used a generic letterhead or plain white paper, but were
generated by the line printer in all caps. A bunch of text
information presented that way is very hard to read, especially single
spaced.
What was sad was that once it became easy to enter and print lower
case letters (via better terminals and xerographic printers in the mid
1980s), many organizations didn't bother upgrading their programs, and
they continued to print in all caps.
One of the first applications I specified at the college was the
printing of financial aid award letters in mixed case, finding a real
use for our new LN03 laser printer that I had ordered because I wanted
one to play with.
--
Pete
Morten Reistad
2013-02-28 13:44:35 UTC
Permalink
Post by Quadibloc
This NASA video on YouTube,
http://youtu.be/nrwpXEiTDVk
shows what is apparently a maintenance console on a Control Data 3x00
computer at around 1:25 into the movie.
The normal console, though, doesn't quite match that of a 3400, 3600,
or 3800.
Anyways, in a search for more information, I came to a Wikipedia page
with a link to a PDF of a lecture from a series about "bizarre
architectures", which covered the Control Data 3600 and 6600. They
apparently were bizarre because they had 6-bit characters instead of 8-
bit characters.
I'll have to say that I found that hyperbolic.
Someone should tell them about radix50. Or IA2/Baudot, or even
7-bit ascii.

-- mrr
n***@cam.ac.uk
2013-02-28 13:52:05 UTC
Permalink
Post by Morten Reistad
Post by Quadibloc
Anyways, in a search for more information, I came to a Wikipedia page
with a link to a PDF of a lecture from a series about "bizarre
architectures", which covered the Control Data 3600 and 6600. They
apparently were bizarre because they had 6-bit characters instead of 8-
bit characters.
I'll have to say that I found that hyperbolic.
Someone should tell them about radix50. Or IA2/Baudot, or even
7-bit ascii.
I didn't know radix50, so I looked it up on Wikipedia - which implies
that a single system might use all of 6-bit characters, five 7-bit
characters in 36 bits AND radix 50!

Now, that puts the ICL 1900 into the shade!


Regards,
Nick Maclaren.
Morten Reistad
2013-02-28 14:12:49 UTC
Permalink
Post by n***@cam.ac.uk
Post by Morten Reistad
Post by Quadibloc
Anyways, in a search for more information, I came to a Wikipedia page
with a link to a PDF of a lecture from a series about "bizarre
architectures", which covered the Control Data 3600 and 6600. They
apparently were bizarre because they had 6-bit characters instead of 8-
bit characters.
I'll have to say that I found that hyperbolic.
Someone should tell them about radix50. Or IA2/Baudot, or even
7-bit ascii.
I didn't know radix50, so I looked it up on Wikipedia - which implies
that a single system might use all of 6-bit characters, five 7-bit
characters in 36 bits AND radix 50!
Now, that puts the ICL 1900 into the shade!
Tops20 shows its seniority!

The external, linker-visible symbol table is in radix50. Process
"names" are in sixbit. Text is in 7-bit ascii stuffed 5 to a word.

Bizarre, indeed! And this was the system where I first cut my
teeth in programming!

-- mrr
John Levine
2013-02-28 18:45:18 UTC
Permalink
Post by Morten Reistad
Tops20 shows its seniority!
Oh, you youngsters.
Post by Morten Reistad
The external, linker-visible symbol table is in radix50. Process
"names" are in sixbit. Text is in 7-bit ascii stuffed 5 to a word.
TOPS-10 and its predecessor the PDP-6 monitor used sixbit file names,
7-bit ASCII text in files, and radix50 in object file symbol tables.
We dealt with it.

We all remember what the low bit in a word of 7-bit ASCII was, don't we?
--
Regards,
John Levine, ***@iecc.com, Primary Perpetrator of "The Internet for Dummies",
Please consider the environment before reading this e-mail. http://jl.ly
Patrick Scheible
2013-02-28 21:23:00 UTC
Permalink
Post by John Levine
Post by Morten Reistad
Tops20 shows its seniority!
Oh, you youngsters.
Post by Morten Reistad
The external, linker-visible symbol table is in radix50. Process
"names" are in sixbit. Text is in 7-bit ascii stuffed 5 to a word.
TOPS-10 and its predecessor the PDP-6 monitor used sixbit file names,
7-bit ASCII text in files, and radix50 in object file symbol tables.
We dealt with it.
We all remember what the low bit in a word of 7-bit ASCII was, don't we?
It makes sense. When bits are precious, you don't use more of them than
you have to. Editing text files, people needed lowercase, but lowercase
wasn't used in monitor commands or symbol tables.

-- Patrick
jmfbahciv
2013-02-28 14:37:19 UTC
Permalink
Post by Quadibloc
This NASA video on YouTube,
http://youtu.be/nrwpXEiTDVk
shows what is apparently a maintenance console on a Control Data 3x00
computer at around 1:25 into the movie.
The normal console, though, doesn't quite match that of a 3400, 3600,
or 3800.
Anyways, in a search for more information, I came to a Wikipedia page
with a link to a PDF of a lecture from a series about "bizarre
architectures", which covered the Control Data 3600 and 6600. They
apparently were bizarre because they had 6-bit characters instead of 8-
bit characters.
I'll have to say that I found that hyperbolic.
Which ties nicely to the Wiki thread and how wrong it can be.

/BAH
Rod Speed
2013-02-28 20:12:51 UTC
Permalink
Post by jmfbahciv
Post by Quadibloc
This NASA video on YouTube,
http://youtu.be/nrwpXEiTDVk
shows what is apparently a maintenance console on a Control Data 3x00
computer at around 1:25 into the movie.
The normal console, though, doesn't quite match that of a 3400, 3600,
or 3800.
Anyways, in a search for more information, I came to a Wikipedia page
with a link to a PDF of a lecture from a series about "bizarre
architectures", which covered the Control Data 3600 and 6600. They
apparently were bizarre because they had 6-bit characters instead of 8-
bit characters.
I'll have to say that I found that hyperbolic.
Which ties nicely to the Wiki thread and how wrong it can be.
But it still leaves what we had to use before it for dead on update
frequency alone.
jmfbahciv
2013-02-28 14:37:21 UTC
Permalink
Post by n***@cam.ac.uk
Post by Morten Reistad
Post by Quadibloc
Anyways, in a search for more information, I came to a Wikipedia page
with a link to a PDF of a lecture from a series about "bizarre
architectures", which covered the Control Data 3600 and 6600. They
apparently were bizarre because they had 6-bit characters instead of 8-
bit characters.
I'll have to say that I found that hyperbolic.
Someone should tell them about radix50. Or IA2/Baudot, or even
7-bit ascii.
I didn't know radix50, so I looked it up on Wikipedia - which implies
that a single system might use all of 6-bit characters, five 7-bit
characters in 36 bits AND radix 50!
Welcome to PDP land.
Post by n***@cam.ac.uk
Now, that puts the ICL 1900 into the shade!
Regards,
Nick Maclaren.
/BAH
Shmuel (Seymour J.) Metz
2013-02-28 16:55:12 UTC
Permalink
Post by jmfbahciv
Welcome to PDP land.
Also IBM land, circa 1961.
--
Shmuel (Seymour J.) Metz, SysProg and JOAT <http://patriot.net/~shmuel>

Unsolicited bulk E-mail subject to legal action. I reserve the
right to publicly post or ridicule any abusive E-mail. Reply to
domain Patriot dot net user shmuel+news to contact me. Do not
reply to ***@library.lspace.org
Quadibloc
2013-02-28 19:08:30 UTC
Permalink
On Feb 28, 9:55 am, Shmuel (Seymour J.) Metz
Post by Shmuel (Seymour J.) Metz
Post by jmfbahciv
Welcome to PDP land.
Also IBM land, circa 1961.
6-bit characters, yes. Radix-40 or Radix-50 and things like that, no.

John Savard
jmfbahciv
2013-02-28 14:37:22 UTC
Permalink
Post by Morten Reistad
Post by n***@cam.ac.uk
Post by Quadibloc
Anyways, in a search for more information, I came to a Wikipedia page
with a link to a PDF of a lecture from a series about "bizarre
architectures", which covered the Control Data 3600 and 6600. They
apparently were bizarre because they had 6-bit characters instead of 8-
bit characters.
I'll have to say that I found that hyperbolic.
6 bit characters were fairly widespread - the ICL 1900 had them, too,
in several variants.
The bizarrest ones I ever saw were ones that packed five 7-bit
characters. I once saw some program code that had been converted
from a (w.l.o.g.) 8-bit machine as binary and then written out
as characters. Problem: get it working on the original system,
starting from the printout. Not that hard, as it was Fortran with
very few multi-digit numbers.
7-bit ascii packed 5 to a (36-bit) word BIZARRE? That is how
it is, on a PDP10. Some of us still run tops20, you know.
You also are using RADIX50, ASCII, SIXBIT, but probably not
Baudot unless you're doing TTY stuff.

What I find odd is the hint that non-DEC types didn't use all
of the encodings.

/BAH
Andrew Swallow
2013-02-28 16:16:24 UTC
Permalink
Post by jmfbahciv
Post by Morten Reistad
Post by n***@cam.ac.uk
Post by Quadibloc
Anyways, in a search for more information, I came to a Wikipedia page
with a link to a PDF of a lecture from a series about "bizarre
architectures", which covered the Control Data 3600 and 6600. They
apparently were bizarre because they had 6-bit characters instead of 8-
bit characters.
I'll have to say that I found that hyperbolic.
6 bit characters were fairly widespread - the ICL 1900 had them, too,
in several variants.
The bizarrest ones I ever saw were ones that packed five 7-bit
characters. I once saw some program code that had been converted
from a (w.l.o.g.) 8-bit machine as binary and then written out
as characters. Problem: get it working on the original system,
starting from the printout. Not that hard, as it was Fortran with
very few multi-digit numbers.
7-bit ascii packed 5 to a (36-bit) word BIZARRE? That is how
it is, on a PDP10. Some of us still run tops20, you know.
You also are using RADIX50, ASCII, SIXBIT, but probably not
Baudot unless you're doing TTY stuff.
What I find odd is the hint that non-DEC types didn't use all
of the encodings.
/BAH
On most computers most programmers only used the keyboard characters -
numbers, upper-case, lower-case, the printing symbols plus <space>,
<new_line>, <carriage_return> and <bell>. The line printer programmer
may use <null> and <Form_feed>.

In COBOL numbers could be stored in binary and binary_coded_decimal. A
core dump may use octal and hex.

The EBCDIC and ASCII character computers rarely needed to use other
coding systems unless they were talking to a different make of computer.

Andrew Swallow
h***@bbs.cpcn.com
2013-02-28 17:28:58 UTC
Permalink
In COBOL numbers could be stored in binary and binary_coded_decimal.  A
core dump may use octal and hex.
In IBM COBOL, numbers could be _stored_ in four ways--binary, binary
floating point, packed decimal, or as characters. To do a calculation
with a number stored as characters requires converting it to another
format first.
I've never seen floating point used in COBOL, though I'm sure someone
has used it. (Today's COBOL has some trig functions).
Scott Lurndal
2013-02-28 18:23:35 UTC
Permalink
Post by Andrew Swallow
In COBOL numbers could be stored in binary and binary_coded_decimal. A
core dump may use octal and hex.
For Burroughs Medium Systems, all numbers in COBOL were stored as either
BCD numbers or EBCDIC numbers (the processor would ignore the zone digit
during arithmetic operations on EBCDIC numbers and add it to the result
if the result was UA (Unsigned Alpha) instead of UN (Unsigned Numeric: BCD)).
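
In EBCDIC a digit is its BCD value under a zone nibble of 0xF ('5' is
0xF5), so "ignore the zone during arithmetic, reattach it for an
alphanumeric result" comes out roughly like this in C (a loose model of
the above, unsigned digits only, names my own):

/* Add two EBCDIC ("UA") digit fields of length n, high-order digit
   first. Zones are ignored on input; zones of 0xF are reattached to
   the result, as for a UA destination. A "UN" (BCD) destination
   would simply omit the 0xF0. */
void ua_add(const unsigned char *a, const unsigned char *b,
            unsigned char *sum, int n)
{
    unsigned carry = 0;
    for (int i = n - 1; i >= 0; i--) {
        unsigned d = (a[i] & 0x0F) + (b[i] & 0x0F) + carry;
        carry = (d >= 10);
        sum[i] = (unsigned char)(0xF0 | (d % 10));
    }
}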
Post by Andrew Swallow
The EBCDIC and ASCII character computers rarely needed to use other
coding systems unless they were talking to a different make of computer.
Burroughs B3500 had a processor toggle that would tell the processor to
interpret a byte as ASCII or EBCDIC (mainly for inserting zone digits). This
feature was removed from the B4900 on.

The I/O processor had a flag that would cause inbound and outbound data
transfers to be translated between EBCDIC and ASCII selectively on each I/O
operation (used, for example, to talk to ASCII Block-mode terminals like the
TD830 or T-27).

scott
n***@cam.ac.uk
2013-02-28 18:38:40 UTC
Permalink
Post by Scott Lurndal
Post by Andrew Swallow
The EBCDIC and ASCII character computers rarely needed to use other
coding systems unless they were talking to a different make of computer.
Burroughs B3500 had a processor toggle that would tell the processor to
interpret a byte as ASCII or EBCDIC (mainly for inserting zone digits). This
feature was removed from the B4900 on.
So did (many? most?) of the System/360 (and System/370?) series.
Rumour has it that they had to fix the architecture before IBM
Galactic Headquarters took a decision to go with EBCDIC, and it
never had any firmware (let alone software) written to go with
it. Nobody knew what would happen if you turned it on :-)


Regards,
Nick Maclaren.
Anne & Lynn Wheeler
2013-02-28 19:00:42 UTC
Permalink
Post by n***@cam.ac.uk
So did (many? most?) of the System/360 (and System/370?) series.
Rumour has it that they had to fix the architecture before IBM
Galactic Headquarters took a decision to go with EBCDIC, and it
never had any firmware (let alone software) written to go with
it. Nobody knew what would happen if you turned it on :-)
recent discussion of 360 ascii in mainframe mailing list
http://www.garlic.com/~lynn/2013b.html#72 One reasonf or monocase was Re: Dualcase vs monocase
http://www.garlic.com/~lynn/2013b.html#73 One reasonf or monocase was Re: Dualcase vs monocase

"The Biggest Computer Goof Ever":
http://www.bobbemer.com/P-BIT.HTM

from above:

I mention this because it is a classic software mistake. IBM was going
to announce the 360 in 1964 April as an ASCII machine, but their
printers and punches were not ready to handle ASCII, and IBM just HAD to
announce. So T.V. Learson (my boss's boss) decided to do both, as IBM
had a store of spendable money. They put in the P-bit. Set one way, it
ran in EBCDIC. Set the other way, it ran in ASCII.

But nobody told the programmers, like a Chinese Army in numbers! They
spent this huge amount of money to make software in which EBCDIC
encodings were used in the logic. Reverse the P-bit, to work in ASCII,
and it died. And they just could not spend that much money again to redo
it.

.... snip ... and (one of the Consequences):

Although some IBM customers would stay with all upper case for a while,
the introduction of lower case would destroy all collating precedent,
and IBM knew that, too. Especially from the STRETCH design in 1958,
where I made a big mistake in setting the collating sequence as
"A-a-B-b-C ..." [2]. Ordering alphabetically in dual case must be a
two-step process -- first on the letter itself, and then on the quality
of the letter (its case).

... snip ...

IBM ASCII reference also mentions getting collating sequence wrong in
STRETCH
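
Bemer's two-step rule, sketched in C for anyone who wants it concrete
(a sketch only; ASCII input and strcmp as the tie-breaker are my
assumptions): compare on the letters first, and only when the strings
are equal apart from case let the case decide.

#include <ctype.h>
#include <string.h>

int dualcase_cmp(const char *a, const char *b)
{
    /* step 1: the letters themselves, ignoring case */
    for (const char *p = a, *q = b; *p || *q; p++, q++) {
        int d = tolower((unsigned char)*p) - tolower((unsigned char)*q);
        if (d)
            return d;
    }
    /* step 2: equal except for case, so the case breaks the tie */
    return strcmp(a, b);
}

Baking "A-a-B-b-C" directly into the code values, as STRETCH did, makes
a one-pass byte compare sort "Ab" before "aA", which is not alphabetical
order.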
--
virtualization experience starting Jan1968, online at home since Mar1970
Peter Flass
2013-02-28 20:26:33 UTC
Permalink
Post by Anne & Lynn Wheeler
Post by n***@cam.ac.uk
So did (many? most?) of the System/360 (and System/370?) series.
Rumour has it that they had to fix the architecture before IBM
Galactic Headquarters took a decision to go with EBCDIC, and it
never had any firmware (let alone software) written to go with
it. Nobody knew what would happen if you turned it on :-)
recent discussion of 360 ascii in mainframe mailing list
http://www.garlic.com/~lynn/2013b.html#72 One reason for monocase was Re: Dualcase vs monocase
http://www.garlic.com/~lynn/2013b.html#73 One reason for monocase was Re: Dualcase vs monocase
http://www.bobbemer.com/P-BIT.HTM
I mention this because it is a classic software mistake. IBM was going
to announce the 360 in 1964 April as an ASCII machine, but their
printers and punches were not ready to handle ASCII, and IBM just HAD to
announce. So T.V. Learson (my boss's boss) decided to do both, as IBM
had a store of spendable money. They put in the P-bit. Set one way, it
ran in EBCDIC. Set the other way, it ran in ASCII.
I've never been clear exactly what this would have done. A character is
just an n-bit chunk with no intrinsic interpretation. It only becomes a
character when it is printed, or punched.
--
Pete
n***@cam.ac.uk
2013-02-28 20:45:20 UTC
Permalink
Post by Peter Flass
Post by Anne & Lynn Wheeler
I mention this because it is a classic software mistake. IBM was going
to announce the 360 in 1964 April as an ASCII machine, but their
printers and punches were not ready to handle ASCII, and IBM just HAD to
announce. So T.V. Learson (my boss's boss) decided to do both, as IBM
had a store of spendable money. They put in the P-bit. Set one way, it
ran in EBCDIC. Set the other way, it ran in ASCII.
I've never been clear exactly what this would have done. A character is
just an n-bit chunk with no intrinsic interpretation. It only becomes a
character when it is printed, or punched.
Er, no, not on that architecture. There were some instructions
that translated to and from characters.



Regards,
Nick Maclaren.
Paul A. Clayton
2013-02-28 20:51:06 UTC
Permalink
[snip]
Post by Peter Flass
Post by Anne & Lynn Wheeler
http://www.bobbemer.com/P-BIT.HTM
I mention this because it is a classic software mistake. IBM was going
to announce the 360 in 1964 April as an ASCII machine, but their
printers and punches were not ready to handle ASCII, and IBM just HAD to
announce. So T.V. Learson (my boss's boss) decided to do both, as IBM
had a store of spendable money. They put in the P-bit. Set one way, it
ran in EBCDIC. Set the other way, it ran in ASCII.
I've never been clear exactly what this would have done. A character is
just an n-bit chunk with no intrinsic interpretation. It only becomes a
character when it is printed, or punched.
If the ISA has instructions to convert from character
data to Binary Coded Decimal and vice versa, then the
character encoding can make a difference. (I think
such operations are called Pack and Unpack on the
IBM zSeries, formerly S/3[679]0.)

For software, it obviously can make a difference for
sorting or changing capitalization.
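
A minimal Python sketch of why such conversions are encoding-sensitive,
with simplified zoned-decimal handling standing in for the real
Pack/Unpack semantics: stripping the zone nibble works for either
encoding, but reattaching one forces a choice.

    def pack_digits(zoned: bytes) -> str:
        # The digit lives in the low nibble for both EBCDIC (0xF?)
        # and ASCII (0x3?) digits, so packing can ignore the zone.
        return ''.join(str(b & 0x0F) for b in zoned)

    def unpack_digits(digits: str, ebcdic: bool = True) -> bytes:
        # Reattaching a zone nibble forces a choice of encoding.
        zone = 0xF0 if ebcdic else 0x30
        return bytes(zone | int(d) for d in digits)

    assert pack_digits(b'\xF1\xF2\xF3') == pack_digits(b'123')  # '123' both ways
    assert unpack_digits('123', ebcdic=True) == b'\xF1\xF2\xF3'
    assert unpack_digits('123', ebcdic=False) == b'123'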
Scott Lurndal
2013-02-27 21:27:31 UTC
Permalink
Post by Peter Flass
Post by Anne & Lynn Wheeler
Post by n***@cam.ac.uk
So did (many? most?) of the System/360 (and System/370?) series.
Rumour has it that they had to fix the architecture before IBM
Galactic Headquarters took a decision to go with EBCDIC, and it
never had any firmware (let alone software) written to go with
it. Nobody knew what would happen if you turned it on :-)
recent discussion of 360 ascii in mainframe mailing list
http://www.garlic.com/~lynn/2013b.html#72 One reason for monocase was Re: Dualcase vs monocase
http://www.garlic.com/~lynn/2013b.html#73 One reason for monocase was Re: Dualcase vs monocase
http://www.bobbemer.com/P-BIT.HTM
I mention this because it is a classic software mistake. IBM was going
to announce the 360 in 1964 April as an ASCII machine, but their
printers and punches were not ready to handle ASCII, and IBM just HAD to
announce. So T.V. Learson (my boss's boss) decided to do both, as IBM
had a store of spendable money. They put in the P-bit. Set one way, it
ran in EBCDIC. Set the other way, it ran in ASCII.
I've never been clear exactly what this would have done. A character is
just an n-bit chunk with no intrinsic interpretation. It only becomes a
character when it is printed, or punched.
When you read in an 80-column card with numeric values, the EBCDIC digits
are stored as F0-F9, but ASCII are stored as 30-39. ASCII software
would expect 30, EBCDIC software F0. Likewise for most other symbols.

The collating sequence also is quite different.
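
Both effects are easy to see with Python's built-in cp037 codec (one
common EBCDIC code page; the exact tables on these machines differed
in detail):

    text = 'A1'
    print(text.encode('cp037').hex())  # c1f1: letter 0xC1, digit 0xF1
    print(text.encode('ascii').hex())  # 4131: letter 0x41, digit 0x31

    # Byte-wise comparison flips: in ASCII digits sort before letters,
    # in EBCDIC letters sort before digits.
    print(sorted(text.encode('ascii')))  # [49, 65]  -> '1' < 'A'
    print(sorted(text.encode('cp037')))  # [193, 241] -> 'A' < '1'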

I would have thought it would have been less expensive for IBM to just put some
translation hardware in the control unit or I/O subsystem rather than adding
the "P" bit.

scott
Peter Flass
2013-03-01 01:59:31 UTC
Permalink
Post by Scott Lurndal
Post by Peter Flass
Post by Anne & Lynn Wheeler
Post by n***@cam.ac.uk
So did (many? most?) of the System/360 (and System/370?) series.
Rumour has it that they had to fix the architecture before IBM
Galactic Headquarters took a decision to go with EBCDIC, and it
never had any firmware (let alone software) written to go with
it. Nobody knew what would happen if you turned it on :-)
recent discussion of 360 ascii in mainframe mailing list
http://www.garlic.com/~lynn/2013b.html#72 One reason for monocase was Re: Dualcase vs monocase
http://www.garlic.com/~lynn/2013b.html#73 One reason for monocase was Re: Dualcase vs monocase
http://www.bobbemer.com/P-BIT.HTM
I mention this because it is a classic software mistake. IBM was going
to announce the 360 in 1964 April as an ASCII machine, but their
printers and punches were not ready to handle ASCII, and IBM just HAD to
announce. So T.V. Learson (my boss's boss) decided to do both, as IBM
had a store of spendable money. They put in the P-bit. Set one way, it
ran in EBCDIC. Set the other way, it ran in ASCII.
I've never been clear exactly what this would have done. A character is
just an n-bit chunk with no intrinsic interpretation. It only becomes a
character when it is printed, or punched.
When you read in an 80-column card with numeric values, the EBCDIC digits
are stored as F0-F9, but ASCII are stored as 30-39. ASCII software
would expect 30, EBCDIC software F0. Likewise for most other symbols.
Yes, but this is done by the control unit (as you allude to later) and the
P bit has no effect on what is stored.
Post by Scott Lurndal
The collating sequence also is quite different.
True, but again irrelevant to the P bit, I believe.

I think the bit had some part in what signs were used for packed decimal
data, and probably the operation of the PACK and UNPK instructions.
Post by Scott Lurndal
I would have thought it would have been less expensive for IBM to just put some
translation hardware in the control unit or I/O subsystem rather than adding
the "P" bit.
--
Pete
Anne & Lynn Wheeler
2013-03-01 02:36:31 UTC
Permalink
Post by Peter Flass
Yes, but this is done by the control unit (as you allude to later) and
the P bit has no effect on what is stored.
re:
http://www.garlic.com/~lynn/2013c.html#14 What Makes an Architecture Bizarre?

there was actually a different kind of problem involving IBM terminals
and ASCII terminals.

IBM terminals weren't ebcdic ... and so terminal data had to be
translated back&forth between terminal bit pattern and ebcdic bit
pattern.

when cp67 was delivered to the univ, it had 1052 & 2741 terminal support
(with appropriate translate tables). The univ had some number of
tty/ascii terminals and I added tty/ascii terminal support and the
appropriate translate tables ... the issue was that there are some chars
in ascii that aren't in ebcdic as well as the reverse ... so there were
some issues of mapping ascii characters (not defined in ebcdic) to
something ... as well mapping some ebcdic characters (not defined in
ascii) to something.

the other issue was that cp67 code as delivered did automatic terminal
type identification ... and dynamically changed the terminal controller
line-scanner to the appropriate one for that kind of terminal. when I
added tty/ascii support ... I extended the automatic terminal type
processing to include tty/ascii. I actually tried to have a single
dial-in number (for all types of dialup terminals; with common
hunt-group ... pool of numbers and corresponding controller port
connections). it turned out that they had taken some short cuts in the
ibm terminal controller ... while it was possible to dynamically change
the terminal type line-scanner for each controller port ... they
hard-wired each port line-speed. It wasn't a problem for a common pool of
1052 & 2741 terminals since they operated at the same line speed ...
but it was a problem for tty/ascii which operated at different line
speed.

this was part of the motivation for the univ to start a clone controller
project, started with interdata/3 programmed to emulate the ibm terminal
controller (but supporting adapting both the port line scanner as well
as the port line speed) ... as well as building a controller channel I/O
interface board for the interdata/3 (later it evolved into an
interdata/4 for the channel interface with pool of interdata/3s for
handling ports). early bug was data arriving in 360 memory all garbled
... it turns out that the ibm terminal controller convention was to
place the arriving leading bit in the low-order bit position in the byte
and then fill the byte in reverse direction as the bits arrived ... then
transmit each byte to 360 memory (so bits within byte were in reverse
order of arrival). The initial testing with interdata/3 had bits in the
byte in bit arrival order (not reverse order) ... aka 360 terminal
standard had terminal/line ascii in bit-reversed order.
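
A minimal sketch of the mismatch (the 'A' value is just an
illustration; the point is that the two conventions fill the byte from
opposite ends):

    def reverse_bits(byte: int) -> int:
        # Mirror an 8-bit value: the first-arrived bit swaps ends.
        out = 0
        for _ in range(8):
            out = (out << 1) | (byte & 1)
            byte >>= 1
        return out

    # An ASCII 'A' (0x41) assembled in straight arrival order reads
    # back as 0x82 under the bit-reversed convention, and vice versa.
    assert reverse_bits(0x41) == 0x82
    assert reverse_bits(0x82) == 0x41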

later, four of us are written up as being responsible for (some part of)
ibm clone controller business. ... some past posts
http://www.garlic.com/~lynn/subtopic.html#360pcm

later still, the folklore is that major motivation for IBM's Future
System effort (completely replace 360 with something radically
different) was to make the controller interface so complex that it would
be a barrier to clone controller business. misc. past posts mentioning
(failed) future system
http://www.garlic.com/~lynn/submain.html#futuresys

The rise and fall of IBM
http://www.ecole.org/en/seances/CM07

from above:

IBM tried to react by launching a major project called the 'Future
System' (FS) in the early 1970's. The idea was to get so far ahead that
the competition would never be able to keep up, and to have such a high
level of integration that it would be impossible for competitors to
follow a compatible niche strategy. However, the project failed because
the objectives were too ambitious for the available technology. Many of
the ideas that were developed were nevertheless adapted for later
generations. Once IBM had acknowledged this failure, it launched its
'box strategy', which called for competitiveness with all the different
types of compatible sub-systems.

... snip ...

actually, lots of FS was pure blue sky, more like vaporware

--
virtualization experience starting Jan1968, online at home since Mar1970
Peter Flass
2013-03-01 12:48:51 UTC
Permalink
Post by Anne & Lynn Wheeler
there was actually a different kind of problem involving IBM terminals
and ASCII terminals.
IBM terminals weren't ebcdic ... and so terminal data had to be
translated back&forth between terminal bit pattern and ebcdic bit
pattern.
I recently re-discovered this and it surprised me - the 2741, the
premier IBM hardcopy terminal for CMS, was BCD!
--
Pete
Stephen Fuld
2013-02-28 21:32:17 UTC
Permalink
Post by Peter Flass
Post by Anne & Lynn Wheeler
Post by n***@cam.ac.uk
So did (many? most?) of the System/360 (and System/370?) series.
Rumour has it that they had to fix the architecture before IBM
Galactic Headquarters took a decision to go with EBCDIC, and it
never had any firmware (let alone software) written to go with
it. Nobody knew what would happen if you turned it on :-)
recent discussion of 360 ascii in mainframe mailing list
http://www.garlic.com/~lynn/2013b.html#72 One reason for monocase was
Re: Dualcase vs monocase
http://www.garlic.com/~lynn/2013b.html#73 One reason for monocase was
Re: Dualcase vs monocase
http://www.bobbemer.com/P-BIT.HTM
I mention this because it is a classic software mistake. IBM was going
to announce the 360 in 1964 April as an ASCII machine, but their
printers and punches were not ready to handle ASCII, and IBM just HAD to
announce. So T.V. Learson (my boss's boss) decided to do both, as IBM
had a store of spendable money. They put in the P-bit. Set one way, it
ran in EBCDIC. Set the other way, it ran in ASCII.
I've never been clear exactly what this would have done. A character is
just an n-bit chunk with no intrinsic interpretation. It only becomes a
character when it is printed, or punched.
How about when it is input from a device such as a punched card reader or
a terminal? You really want to know, for example, whether numbers
collate higher or lower than characters.
--
- Stephen Fuld
(e-mail address disguised to prevent spam)
h***@bbs.cpcn.com
2013-03-01 02:52:28 UTC
Permalink
Post by Stephen Fuld
How about when it is input from a device such as a punched card reader or
a terminal?  You really want to know, for example, whether numbers
collate higher or lower than characters.
The collating sequence on a punched card is tricky because numbers are
represented by a single punch while letters are represented by two
punches. IBM machines had/have instructions to move just the 'zone'
or the 'numeric' internally.

If you're sorting the cards on a card sorter, you would need two
passes for alpha data. Since numeric data does not have a zone punch
(unless it's signed), it would be a space and show up on top.
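
The two-pass alpha sort is just a stable radix sort on the two rows.
A sketch, assuming the usual Hollerith zone assignments (A-I = 12-zone,
J-R = 11-zone, S-Z = 0-zone, each combined with a digit punch):

    def punches(ch):
        i = ord(ch) - ord('A')
        if i < 9:  return (12, i + 1)   # A-I: 12-zone, digits 1-9
        if i < 18: return (11, i - 8)   # J-R: 11-zone, digits 1-9
        return (0, i - 16)              # S-Z: 0-zone,  digits 2-9

    def sorter_pass(cards, key):
        # One trip through the sorter: a stable bucket sort on one row.
        buckets = {}
        for c in cards:
            buckets.setdefault(key(c), []).append(c)
        return [c for k in sorted(buckets) for c in buckets[k]]

    zone_order = {12: 0, 11: 1, 0: 2}   # A-I before J-R before S-Z
    cards = list('QUADIBLOC')
    cards = sorter_pass(cards, lambda c: punches(c)[1])              # pass 1: digit row
    cards = sorter_pass(cards, lambda c: zone_order[punches(c)[0]])  # pass 2: zone row
    assert cards == sorted('QUADIBLOC')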

As to terminal entry, what counts is how the entry is ultimately
stored in the machine.

On IBM mainframes, when sorting, one specifies the format of the data--
character, packed decimal, binary, etc.
MitchAlsup
2013-03-01 04:19:20 UTC
Permalink
Post by Anne & Lynn Wheeler
Although some IBM customers would stay with all upper case for a while,
the introduction of lower case would destroy all collating precedent,
and IBM knew that, too. Especially from the STRETCH design in 1958,
where I made a big mistake in setting the collating sequence as
"A-a-B-b-C ..." [2].
Thank you for your candor.

Mitch
n***@cam.ac.uk
2013-03-01 08:12:10 UTC
Permalink
Post by MitchAlsup
Post by Anne & Lynn Wheeler
Although some IBM customers would stay with all upper case for a while,
the introduction of lower case would destroy all collating precedent,
and IBM knew that, too. Especially from the STRETCH design in 1958,
where I made a big mistake in setting the collating sequence as
"A-a-B-b-C ..." [2].
Thank you for your candor.
Refreshing, isn't it? But the difficulty is that that collating
sequence is perhaps the most user-friendly for non-geeks, and is
now on the increase again after its abeyance. Collation is not
as simple as just defining an order :-)


Regards,
Nick Maclaren.
unknown
2013-03-01 10:51:04 UTC
Permalink
Post by n***@cam.ac.uk
Post by MitchAlsup
Post by Anne & Lynn Wheeler
Although some IBM customers would stay with all upper case for a while,
the introduction of lower case would destroy all collating precedent,
and IBM knew that, too. Especially from the STRETCH design in 1958,
where I made a big mistake in setting the collating sequence as
"A-a-B-b-C ..." [2].
Thank you for your candor.
Refreshing, isn't it? But the difficulty is that that collating
sequence is perhaps the most user-friendly for non-geeks, and is
now on the increase again after its abeyance. Collation is not
as simple as just defining an order :-)
I've made the (naive?) mistake once of defining a collating sequence
similar to the one above, the key being that it was reversible, i.e. I
could do a table-lookup translation, do all the work I needed, and then
translate back just before final printout/file write.

This saved memory for keeping two copies and/or cpu cycles to do all the
translation as part of the (sort) comparison routine.
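
In Python terms the trick looks something like this (the 'AaBb...'
ordering here is just for illustration):

    # Build a reversible byte translation into "collation space".
    order = ''.join(c + c.lower() for c in 'ABCDEFGHIJKLMNOPQRSTUVWXYZ')
    fwd = bytes.maketrans(order.encode(), bytes(range(len(order))))
    rev = bytes.maketrans(bytes(range(len(order))), order.encode())

    data = [b'banana', b'Apple', b'apple']
    work = sorted(s.translate(fwd) for s in data)  # plain byte compares
    print([s.translate(rev) for s in work])        # [b'Apple', b'apple', b'banana']

The comparison loop only ever sees plain bytes; the translation cost
is paid once on the way in and once on the way out.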

Terje
--
- <Terje.Mathisen at tmsw.no>
"almost all programming can be viewed as an exercise in caching"
Mark Thorson
2013-02-28 23:36:18 UTC
Permalink
Post by n***@cam.ac.uk
Post by Scott Lurndal
Post by Andrew Swallow
The EBCDIC and ASCII character computers rarely needed to use other
coding systems unless they were talking to a different make of computer.
Burroughs B3500 had a processor toggle that would tell the processor to
interpret a byte as ASCII or EBCDIC (mainly for inserting zone digits). This
feature was removed from the B4900 on.
So did (many? most?) of the System/360 (and System/370?) series.
Rumour has it that they had to fix the architecture before IBM
Galactic Headquarters took a decision to go with EBCDIC, and it
never had any firmware (let alone software) written to go with
it. Nobody knew what would happen if you turned it on :-)
The story of that decision is here:

http://www.bobbemer.com/P-BIT.HTM
Shmuel (Seymour J.) Metz
2013-02-28 19:05:23 UTC
Permalink
Post by n***@cam.ac.uk
Post by Scott Lurndal
Post by Andrew Swallow
The EBCDIC and ASCII character computers rarely needed to use other
coding systems unless they were talking to a different make of computer.
Burroughs B3500 had a processor toggle that would tell the processor to
interpret a byte as ASCII or EBCDIC (mainly for inserting zone digits). This
feature was removed from the B4900 on.
So did (many? most?) of the System/360 (and System/370?) series.
No; the S/360 had a PSW bit, not a toggle switch, and the S/370 had
neither.
Post by n***@cam.ac.uk
Rumour has it that they had to fix the architecture before IBM
Galactic Headquarters took a decision to go with EBCDIC, and it
never had any firmware (let alone software) written to go with it.
Nobody knew what would happen if you turned it on :-)
That's the first I've heard of it. Given IBM's attitude towards the
architecture specifications, I'd be very surprised if it wasn't tested
thoroughly. What is true is that IBM didn't release an OS that
supported the ASCII bit.
--
Shmuel (Seymour J.) Metz, SysProg and JOAT <http://patriot.net/~shmuel>

Unsolicited bulk E-mail subject to legal action. I reserve the
right to publicly post or ridicule any abusive E-mail. Reply to
domain Patriot dot net user shmuel+news to contact me. Do not
reply to ***@library.lspace.org
Robert Wessel
2013-03-01 07:21:04 UTC
Permalink
Post by n***@cam.ac.uk
Post by Scott Lurndal
Post by Andrew Swallow
The EBCDIC and ASCII character computers rarely needed to use other
coding systems unless they were talking to a different make of computer.
Burroughs B3500 had a processor toggle that would tell the processor to
interpret a byte as ASCII or EBCDIC (mainly for inserting zone digits). This
feature was removed from the B4900 on.
So did (many? most?) of the System/360 (and System/370?) series.
Rumour has it that they had to fix the architecture before IBM
Galactic Headquarters took a decision to go with EBCDIC, and it
never had any firmware (let alone software) written to go with
it. Nobody knew what would happen if you turned it on :-)
(addressing several posts in this subthread)

S/360 had an ASCII bit, which impacted the operation of several
instructions. This was implemented on most (or all) S/360s, although
it's possible that some of the odder machines omitted it.

That mode bit was reused on S/370 to select "Extended Control" mode,
which rearranged some system level stuff and was a prerequisite to
enabling paging. So no S/370 included ASCII mode. Note that the
360/67 had a different method of enabling its equivalent of EC mode,
and it retained ASCII mode in both Basic and Extended modes.

ASCII mode only impacted a limited number of instructions, and only in
what their outputs were.

Instructions that generated packed decimal results (AP, SP, ZAP, MP,
DP, CVD) would generate the ASCII preferred sign nibbles (0x?A and
0x?B) instead of the EBCDIC ones (0x?C and 0x?D) . Note that all of
those instructions (to this day) always accept both sets of signs as
input.

Instructions that expanded packed decimal fields into character fields
(UNPK, ED, EDMK) would generate either the EBCDIC (0xf?) or ASCII
(0x3?) high nibble as appropriate for the mode. Again, inputs were
not affected.
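
A sketch of the mode's effect in Python, following the nibble values
above; field sizes and the packed input format are simplified, so this
illustrates the outputs rather than being a faithful UNPK:

    def unpk(digits: str, negative: bool, ascii_mode: bool) -> bytes:
        zone = 0x30 if ascii_mode else 0xF0
        sign = (0x0B if negative else 0x0A) if ascii_mode else \
               (0x0D if negative else 0x0C)
        out = bytearray(zone | int(d) for d in digits)
        out[-1] = (sign << 4) | int(digits[-1])  # sign rides the last zone
        return bytes(out)

    print(unpk('123', False, ascii_mode=False).hex())  # f1f2c3 (EBCDIC, +)
    print(unpk('123', False, ascii_mode=True).hex())   # 3132a3 (ASCII, +)
    print(unpk('123', True,  ascii_mode=True).hex())   # 3132b3 (ASCII, -)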

I'm pretty sure the sum total impact of ASCII mode was on the outputs
of those nine instructions.

Despite basically never having been used, it seems unlikely that a
mode with such minor impact wouldn't have been tested, especially as
it was documented from day one.

I/O devices were a different issue, particularly printers and punches
(disk and tape devices, for example, were mostly insensitive to the
character coding). And it may have been punch card equipment
specifically, since the way print trains were mapped on most printers
of that era (basically there was storage associating a character code
with each slug on the print train, and when the slug matching the
printed character passed the correct position the hammer was
triggered) should have made an ASCII interpretation pretty
straight-forward, with the possible exception of folding (automatic
lower-to-uppercase conversion). But those are pretty much outside the
CPU.

But IBM never delivered any OS's using ASCII mode. In more recent
machines a number of instructions have been added to better support
ASCII, although the decimal instructions still use only the EBCDIC
"mode".
Stephen Fuld
2013-03-01 08:38:05 UTC
Permalink
On 2/28/2013 11:21 PM, Robert Wessel wrote:

snip
Post by Robert Wessel
I/O devices were a different issue, particularly printers and punches
(disk and tape devices, for example, were mostly insensitive to the
character coding).
I vaguely remember a parameter on the DD card that invoked ASCII for
tapes. IIRC, it had to do with the ability to read ANSI standard
interchange format, which had ASCII data, and had ANSI standard tape
labels, which, besides using ASCII had a slightly different format than
IBM standard labels. Whether this all did hardware translate, and if
so, where in the system the hardware was located, is lost in my memory
banks. :-(
--
- Stephen Fuld
(e-mail address disguised to prevent spam)
Robert Wessel
2013-03-01 09:12:17 UTC
Permalink
On Fri, 01 Mar 2013 00:38:05 -0800, Stephen Fuld
Post by Stephen Fuld
snip
Post by Robert Wessel
I/O devices were a different issue, particularly printers and punches
(disk and tape devices, for example, were mostly insensitive to the
character coding).
I vaguely remember a parameter on the DD card that invoked ASCII for
tapes. IIRC, it had to do with the ability to read ANSI standard
interchange format, which had ASCII data, and had ANSI standard tape
labels, which, besides using ASCII had a slightly different format than
IBM standard labels. Whether this all did hardware translate, and if
so, where in the system the hardware was located, is lost in my memory
banks. :-(
OPTCD=Q, a DCB subparameter.

That was (is) strictly translated by software. Of course it was
mostly useless, since it had no way to specify what parts of the input
record were binary.

There was other oddness for 7-track tape support.
n***@cam.ac.uk
2013-03-01 09:26:33 UTC
Permalink
Post by Robert Wessel
On Fri, 01 Mar 2013 00:38:05 -0800, Stephen Fuld
Post by Stephen Fuld
Post by Robert Wessel
I/O devices were a different issue, particularly printers and punches
(disk and tape devices, for example, were mostly insensitive to the
character coding).
I vaguely remember a parameter on the DD card that invoked ASCII for
tapes. IIRC, it had to do with the ability to read ANSI standard
interchange format, which had ASCII data, and had ANSI standard tape
labels, which, besides using ASCII had a slightly different format than
IBM standard labels. Whether this all did hardware translate, and if
so, where in the system the hardware was located, is lost in my memory
banks. :-(
OPTCD=Q on the DCB subparameter.
That was (is) strictly translated by software. Of course it was
mostly useless, since it had no way to specify what parts of the input
record were binary.
There was other oddness for 7-track tape support.
And how! Not just on IBM, either - ICL short-inter-block-gap format
was a particularly delightful aberration - except to people who
wanted to read old tapes :-)

I have a program that I wrote to unpick 1/2" tapes under Unix, and
it handles a large number of MVS formats (including pretty well all
9-track formats, IEBCOPY dumped PDSs etc.), but it would have
hysterics if I fed it a 7-track one.


Regards,
Nick Maclaren.
Quadibloc
2013-02-28 19:10:35 UTC
Permalink
Post by Andrew Swallow
On most computers most programmers only used the keyboard characters -
numbers, upper-case, lower-case, the printing symbols plus <space>,
<new_line>, <carriage_return> and <bell>.  The line printer programmer
may use <null> and <Form_feed>.
In the IBM world, applications programmers didn't deal with control
characters. All characters were treated as printable. If you had
variable length character strings, you indicated this with a length,
not a terminating character. If you wanted to tell a printer to double
space or overprint, you indicated that with the first character of the
line of text you sent to it.
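
A sketch of that first-character convention, assuming the common ASA
control values ('1' top of form, '0' double space, '+' overprint,
' ' single space):

    ASA = {' ': '\n', '0': '\n\n', '1': '\f', '+': '\r'}

    def emit(record: str) -> str:
        # Carriage control precedes the data and is never printed.
        return ASA.get(record[0], '\n') + record[1:]

    print(repr(emit('1PAGE HEADING')))  # '\x0cPAGE HEADING'
    print(repr(emit(' detail line')))   # '\ndetail line'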

John Savard
h***@bbs.cpcn.com
2013-02-28 15:49:17 UTC
Permalink
Post by Quadibloc
Anyways, in a search for more information, I came to a Wikipedia page
with a link to a PDF of a lecture from a series about "bizarre
architectures", which covered the Control Data 3600 and 6600. They
apparently were bizarre because they had 6-bit characters instead of 8-
bit characters.
I thought the CDC machines had a 60 bit word.

6 bit characters were once common--the 14xx series used them. Indeed,
there was a debate during design of System/360 over whether to use 6
or 8 bit characters. Blaauw sought 8 bits, Amdahl sought 6 bits.
(IBM's 360, pp 148-149).
Quadibloc
2013-02-28 19:01:50 UTC
Permalink
Post by h***@bbs.cpcn.com
Post by Quadibloc
Anyways, in a search for more information, I came to a Wikipedia page
with a link to a PDF of a lecture from a series about "bizarre
architectures", which covered the Control Data 3600 and 6600. They
apparently were bizarre because they had 6-bit characters instead of 8-
bit characters.
I thought the CDC machines had a 60 bit word.
Yes, the 6600 had a 60-bit word of ten 6-bit characters, and the 3600
had a 48-bit word of eight 6-bit characters.
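
A sketch of packing ten 6-bit characters into one 60-bit word, using
letter codes A=1..Z=26 (which matches the letter assignments of CDC
display code, though the full table varied with the 63/64-character
sets):

    def pack60(s: str) -> int:
        assert len(s) == 10 and s.isupper()
        word = 0
        for ch in s:
            word = (word << 6) | (ord(ch) - ord('A') + 1)  # one 6-bit code
        return word

    w = pack60('HELLOWORLD')
    print(f'{w:060b}')  # ten 6-bit fields in one 60-bit word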

John Savard
Patrick Scheible
2013-02-28 21:24:53 UTC
Permalink
Post by h***@bbs.cpcn.com
Post by Quadibloc
Anyways, in a search for more information, I came to a Wikipedia page
with a link to a PDF of a lecture from a series about "bizarre
architectures", which covered the Control Data 3600 and 6600. They
apparently were bizarre because they had 6-bit characters instead of 8-
bit characters.
I thought the CDC machines had a 60 bit word.
6 bit characters were once common--the 14xx series used them. Indeed,
there was a debate during design of System/360 over whether to use 6
or 8 bit characters. Blaauw sought 8 bits, Amdahl sought 6 bits.
(IBM's 360, pp 148-149).
They should have compromised and used 7 :)

-- Patrick
Tom Gardner
2013-02-28 16:28:44 UTC
Permalink
Post by Quadibloc
This NASA video on YouTube,
http://youtu.be/nrwpXEiTDVk
shows what is apparently a maintenance console on a Control Data 3x00
computer at around 1:25 into the movie.
The normal console, though, doesn't quite match that of a 3400, 3600,
or 3800.
Anyways, in a search for more information, I came to a Wikipedia page
with a link to a PDF of a lecture from a series about "bizarre
architectures", which covered the Control Data 3600 and 6600. They
apparently were bizarre because they had 6-bit characters instead of 8-
bit characters.
I'll have to say that I found that hyperbolic.
Yes indeed, or maybe it is just a lack of imagination.

I rather like machines with a 39-bit word and two instructions per word.
Power of two? Even number of bits? Pah.

I believe there's a working Elliott 803 at Bletchley Park. Must go and
pay my respects sometime.
h***@bbs.cpcn.com
2013-02-28 18:45:16 UTC
Permalink
Post by Tom Gardner
I rather like machines with a 39-bit word and two instructions per word.
Power of two? Even number of bits? Pah.
The Bendix* G-15 appears to have had a 29 bit word (28 for magnitude,
one for the sign).

http://bitsavers.informatik.uni-stuttgart.de/pdf/bendix/g-15/APR-01601-1_G15_Programming_Part1_Jul61.pdf


*Total trivia note: The manual gave a street address for Bendix. I
checked it out on google streetview and found that the building at
that address is now a rent-a-car facility. The area is close to the
airport, and it seems most stuff there is to serve the airport,
including big parking lots.

It can be interesting looking up such old corporate addresses on
streetview to see if the company is still there or what has replaced
it. For instance, in the 1950s IBM ran an ad touting how its tab
machines help a company called Camloc. They're still in business at
the same location--the facade of the building looks the same as in the
ad.
Peter Flass
2013-02-28 20:22:55 UTC
Permalink
Post by h***@bbs.cpcn.com
Post by Tom Gardner
I rather like machines with a 39-bit word and two instructions per word.
Power of two? Even number of bits? Pah.
The Bendix* G-15 appears to have had a 29 bit word (28 for magnitude,
one for the sign).
http://bitsavers.informatik.uni-stuttgart.de/pdf/bendix/g-15/APR-01601-1_G15_Programming_Part1_Jul61.pdf
*Total trivia note: The manual gave a street address for Bendix. I
checked it out on google streetview and found that the building at
that address is now a rent-a-car facility. The area is close to the
airport, and it seems most stuff there is to serve the airport,
including big parking lots.
It can be interesting looking up such old corporate addresses on
streetview to see if the company is still there or what has replaced
it. For instance, in the 1950s IBM ran an ad touting how its tab
machines help a company called Camloc. They're still in business at
the same location--the facade of the building looks the same as in the
ad.
CTG's original headquarters is now (IIRC) an insurance agency and other
stuff. I think their current HQ is the old IBM office on Delaware Avenue
in Buffalo.
--
Pete
Scott Lurndal
2013-02-27 21:31:57 UTC
Permalink
Post by Peter Flass
Post by h***@bbs.cpcn.com
Post by Tom Gardner
I rather like machines with a 39-bit word and two instructions per word.
Power of two? Even number of bits? Pah.
The Bendix* G-15 appears to have had a 29 bit word (28 for magnitude,
one for the sign).
http://bitsavers.informatik.uni-stuttgart.de/pdf/bendix/g-15/APR-01601-1_G15_Programming_Part1_Jul61.pdf
*Total trivia note: The manual gave a street address for Bendix. I
checked it out on google streetview and found that the building at
that address is now a rent-a-car facility. The area is close to the
airport, and it seems most stuff there is to serve the airport,
including big parking lots.
It can be interesting looking up such old corporate addresses on
streetview to see if the company is still there or what has replaced
it. For instance, in the 1950s IBM ran an ad touting how its tab
machines help a company called Camloc. They're still in business at
the same location--the facade of the building looks the same as in the
ad.
CTG's original headquarters is now (IIRC) an insurance agency and other
stuff. I think their current HQ is the old IBM office on Delaware Avenue
in Buffalo.
Last I was there, the Burroughs Pasadena facility (Electrodata in the 50's,
sold by Unisys in 1992) is now NASA/JPL on the ground floor, and a 24-hour
fitness in the basement.

http://maps.google.com/?ll=34.154352,-118.081647&spn=0.001676,0.001996&t=m&z=19&layer=c&cbll=34.154539,-118.081963&panoid=lQY0zha2n3bciFWUcIGJPA&cbp=12,74.68,,0,0.48

scott
Ivan Godard
2013-02-28 22:30:19 UTC
Permalink
Post by Scott Lurndal
Last I was there, the Burroughs Pasadena facility (Electrodata in the 50's,
sold by Unisys in 1992) is now NASA/JPL on the ground floor, and a 24-hour
fitness in the basement.
http://maps.google.com/?ll=34.154352,-118.081647&spn=0.001676,0.001996&t=m&z=19&layer=c&cbll=34.154539,-118.081963&panoid=lQY0zha2n3bciFWUcIGJPA&cbp=12,74.68,,0,0.48
I wrote my first compiler in that building.

Ivan
Patrick Scheible
2013-02-28 21:26:39 UTC
Permalink
Post by Tom Gardner
Post by Quadibloc
This NASA video on YouTube,
http://youtu.be/nrwpXEiTDVk
shows what is apparently a maintenance console on a Control Data 3x00
computer at around 1:25 into the movie.
The normal console, though, doesn't quite match that of a 3400, 3600,
or 3800.
Anyways, in a search for more information, I came to a Wikipedia page
with a link to a PDF of a lecture from a series about "bizarre
architectures", which covered the Control Data 3600 and 6600. They
apparently were bizarre because they had 6-bit characters instead of 8-
bit characters.
I'll have to say that I found that hyperbolic.
Yes indeed, or maybe it is just a lack of imagination.
I rather like machines with a 39-bit word and two instructions per word.
Are there real machines like that? 19 bits per instruction and one
extra bit?

-- Patrick
Bill Findlay
2013-02-28 22:04:05 UTC
Permalink
Post by Patrick Scheible
...
Post by Tom Gardner
Yes indeed, or maybe it is just a lack of imagination.
I rather like machines with a 39-bit word and two instructions per word.
Are there real machines like that? 19 bits per instruction and one
extra bit?
Yes, the Elliott 803A/B and 503.

If the extra bit was set, the operand of the first instruction in the word
acted as a modifier on the effective address of the second instruction.

Elliott computers were all quite ... eccentric.
--
Bill Findlay
with blueyonder.co.uk;
use surname & forename;
Tom Gardner
2013-02-28 23:27:25 UTC
Permalink
Post by Patrick Scheible
Post by Tom Gardner
Post by Quadibloc
This NASA video on YouTube,
http://youtu.be/nrwpXEiTDVk
shows what is apparently a maintenance console on a Control Data 3x00
computer at around 1:25 into the movie.
The normal console, though, doesn't quite match that of a 3400, 3600,
or 3800.
Anyways, in a search for more information, I came to a Wikipedia page
with a link to a PDF of a lecture from a series about "bizarre
architectures", which covered the Control Data 3600 and 6600. They
apparently were bizarre because they had 6-bit characters instead of 8-
bit characters.
I'll have to say that I found that hyperbolic.
Yes indeed, or maybe it is just a lack of imagination.
I rather like machines with a 39-bit word and two instructions per word.
Are there real machines like that? 19 bits per instruction and one
extra bit?
Yup, as I indicated, the Elliott 803.

To tie in with the other sub-thread, it used 5-bit teletypes
for human i/o; the interesting characters were figure-shift
and letter-shift which specified that later characters were
figures/symbols or letters respectively.
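
Decoding that stream is inherently stateful. A sketch with a tiny
fragment of the ITA2 table (letters-shift is 11111, figures-shift is
11011; the full table is omitted):

    LTRS, FIGS = 0b11111, 0b11011
    TABLE = {0b00011: ('A', '-'),   # (letters case, figures case)
             0b11001: ('B', '?'),
             0b01110: ('C', ':')}   # ... remaining codes omitted

    def decode(codes):
        shift, out = 0, []          # 0 = letters, 1 = figures
        for c in codes:
            if c == LTRS:    shift = 0
            elif c == FIGS:  shift = 1
            elif c in TABLE: out.append(TABLE[c][shift])
        return ''.join(out)

    print(decode([0b00011, FIGS, 0b00011, LTRS, 0b11001]))  # 'A-B'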

Online storage used magnetic *film*, effectively 35mm wide
sprocketed magnetic tape. The designers were just down the
road from Borehamwood movie studios and their associated
film processing industries.
MitchAlsup
2013-02-28 17:42:35 UTC
Permalink
which covered the Control Data 3600 and 6600. They apparently were bizarre
because they had 6-bit characters instead of 8-bit characters. I'll have
to say that I found that hyperbolic.
Then why did Radix-50 characters not get the bizarre label?

Mitch
Anton Ertl
2013-02-28 18:25:38 UTC
Permalink
Post by Quadibloc
Anyways, in a search for more information, I came to a Wikipedia page
with a link to a PDF of a lecture from a series about "bizarre
architectures", which covered the Control Data 3600 and 6600. They
apparently were bizarre because they had 6-bit characters instead of 8-
bit characters.
I'll have to say that I found that hyperbolic.
Well, given an unspecified link in an unspecified Wikipedia page, I
have no basis for agreeing or disagreeing with your evaluation.

In any case, at the time of the CDC 6600, word-addressed machines with
subdivisions of words into any convenient byte counts were the norm,
so that wasn't bizarre at the time. The IBM S/360 introduced byte
addressing and the 8-bit byte, as discussed not too long ago; and the
PDP-11 then also chose byte addressing and 8-bit bytes, and the 8008
and 6800 as well; and this all led to byte addressing and 8-bit bytes
becoming a de-facto standard; there are still word-addressed machines
in embedded CPUs, though.

- anton
--
M. Anton Ertl Some things have to be seen to be believed
***@mips.complang.tuwien.ac.at Most things have to be believed to be seen
http://www.complang.tuwien.ac.at/anton/home.html
h***@bbs.cpcn.com
2013-02-28 18:51:55 UTC
Permalink
Post by Anton Ertl
The IBM S/360 introduced byte
addressing and the 8-bit byte,
While the byte was the basic working unit of the IBM S/360 family, one
could address storage as either bytes or words (half word, word,
doubleword). The general registers held 16 bits and there were
floating point registers.
John Levine
2013-02-28 19:02:13 UTC
Permalink
Post by h***@bbs.cpcn.com
The IBM S/360 introduced byte addressing and the 8-bit byte,
While the byte was the basic working unit of the IBM S/360 family, one
could address storage as either bytes or words (half word, word,
doubleword). The general registers held 16 bits and there were
floating point registers.
Not really. All addressing was byte addressing. There were halfword,
word, and doubleword instructions, whose operands had to have the low
bit, two bits, or three bits zero (a restriction later removed by the
370's byte-aligned operand feature.)
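
In other words, an n-byte operand needed the low log2(n) address bits
clear:

    def aligned(addr: int, size: int) -> bool:
        # size is 2, 4 or 8 for halfword, word, doubleword
        return addr & (size - 1) == 0

    assert aligned(0x1000, 8)       # doubleword-aligned
    assert not aligned(0x1002, 4)   # word op on a halfword boundary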

Except on the 360/20, the general registers were 32 bits.
--
Regards,
John Levine, ***@iecc.com, Primary Perpetrator of "The Internet for Dummies",
Please consider the environment before reading this e-mail. http://jl.ly
Quadibloc
2013-02-28 19:07:25 UTC
Permalink
Post by h***@bbs.cpcn.com
Post by Anton Ertl
The IBM S/360 introduced byte
addressing and the 8-bit byte,
While the byte was the basic working unit of the IBM S/360 family, one
could address storage as either bytes or words (half word, word,
doubleword).  The general registers held 16 bits and there were
floating point registers.
Yes, but addresses were *in units of bytes* even if they had to be
aligned on halfword, word, or doubleword boundaries, and they referred
to objects of those lengths.

John Savard
Quadibloc
2013-02-28 19:06:34 UTC
Permalink
Post by Anton Ertl
Well, given an unspecified link in an unspecified Wikipedia page, I
have no basus for agreeing or disagreeing with your evaluation.
That's fair. Here's the page:

http://en.wikipedia.org/wiki/CDC_3000

and here's the link to an installment from the "Bizarre Architecture"
lecture series at the University of Massachusetts:

http://people.cs.umass.edu/~verts/cmpsci201/spr_2004/Lecture_33_2004-04-30_CDC-3300_and_6000.pdf

I think it's clear that since he was giving a week's worth of
lectures, he had to include computers that were mildly unconventional
by _today's_ standards in his list to pad it out.

To me, "bizarre" would have to mean something significantly
unconventional, not something that happened to use a basic storage
unit only large enough to hold the character set that people thought
computers needed back then.

John Savard
Ivan Godard
2013-02-28 19:31:00 UTC
Permalink
Post by Quadibloc
To me, "bizarre" would have to mean something significantly
unconventional, not something that happened to use a basic storage
unit only large enough to hold the character set that people thought
computers needed back then.
One of my favorites was the Astronautics ZS-1, the first commercial
machine with a decoupled access-execute architecture. I put something
similar in the Harris Supercomputer, which sadly (for me; very wisely
for Harris) never got built.

Ivan
EricP
2013-02-28 19:49:46 UTC
Permalink
Post by Quadibloc
To me, "bizarre" would have to mean something significantly
unconventional, not something that happened to use a basic storage
unit only large enough to hold the character set that people thought
computers needed back then.
I think the RCA 1802's "set a register to select which general register
is the program counter (oh and leave out call and return instructions)"
certainly qualifies. I've never figured out a reason that would possibly
justify this architecture decision.
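
For what it's worth, the mechanism itself is simple to model; a sketch
(register widths and fetch machinery simplified) of how SEP's "pick
the PC" stands in for call and return:

    class CDP1802:
        def __init__(self):
            self.r = [0] * 16    # sixteen 16-bit scratch registers
            self.p = 0           # 4-bit selector: which R is the PC
        def sep(self, n):        # SEP n: register Rn becomes the PC
            self.p = n
        def pc(self):
            return self.r[self.p]

    cpu = CDP1802()
    cpu.r[3] = 0x8000            # subroutine address preloaded in R3
    cpu.sep(3)                   # "call": fetch continues from R3
    assert cpu.pc() == 0x8000    # the routine SEPs back to "return"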

call and ret do make the state sequencer more complicated,
but not that much more.

Also was just reading about the PDP 8's DCA Deposit (store to memory)
and Clear Accumulator instruction. If you did not want to clear
the accumulator, you had to add it back in again afterwards.

Eric
Peter Flass
2013-02-28 20:30:50 UTC
Permalink
Post by EricP
Post by Quadibloc
To me, "bizarre" would have to mean something significantly
unconventional, not something that happened to use a basic storage
unit only large enough to hold the character set that people thought
computers needed back then.
I think the RCA 1802's "set a register to select which general register
is the program counter (oh and leave out call and return instructions)"
certainly qualifies. I've never figured a reason that would possibly
justify this architecture decision.
call and ret do make the state sequencer more complicated,
but not that much more.
Also was just reading about the PDP 8's DCA Deposit (store to memory)
and Clear Accumulator instruction. If you did not want to clear
the accumulator, you had to add it back in again afterwards.
I thought the -8 was bizarre in general, justified only by its use of
discrete transistor logic.
--
Pete
Quadibloc
2013-03-01 02:41:53 UTC
Permalink
Post by EricP
Also was just reading about the PDP 8's DCA Deposit (store to memory)
and Clear Accumulator instruction. If you did not want to clear
the accumulator, you had to add it back in again afterwards.
That was its one eccentricity, explained by the need to reserve an
opcode for I/O: TAD and DCA let you do everything you could do with
TAD, LDA, and STO. (It was TAD instead of ADD for historical reasons:
the PDP-9 and its relatives had opcodes for both one's complement and
two's complement arithmetic. Now, _that's_ weird.)
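
A sketch of those equivalences on a toy 12-bit accumulator (CLA clears
AC; TAD is two's-complement add; DCA deposits and clears):

    class PDP8:
        def __init__(self):
            self.mem, self.ac = {}, 0
        def cla(self):       self.ac = 0
        def tad(self, a):    self.ac = (self.ac + self.mem.get(a, 0)) & 0o7777
        def dca(self, a):    self.mem[a] = self.ac; self.ac = 0

    m = PDP8()
    m.mem[0o10] = 0o123
    m.cla(); m.tad(0o10)         # LDA: clear, then add
    assert m.ac == 0o123
    m.dca(0o20); m.tad(0o20)     # STO without losing AC: deposit, add back
    assert m.mem[0o20] == 0o123 and m.ac == 0o123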

John Savard
Shmuel (Seymour J.) Metz
2013-02-28 19:02:06 UTC
Permalink
Post by Quadibloc
Anyways, in a search for more information, I came to a Wikipedia page
with a link to a PDF of a lecture from a series about "bizarre
architectures", which covered the Control Data 3600 and 6600. They
apparently were bizarre because they had 6-bit characters instead of 8-
bit characters.
I'll have to say that I found that hyperbolic.
I found it stark raving bonkers. Essentially[1] every binary computer
of that era used 6-bit characters.
The IBM S/360 introduced byte addressing and the 8-bit byte,
No, it introduced the "any size that you want as long as it's eight"
in place of the byte addressing on the earlier 7030.

[1] The IBM 7030 (Stretch) is the only exception that comes to
mind.
--
Shmuel (Seymour J.) Metz, SysProg and JOAT <http://patriot.net/~shmuel>

Unsolicited bulk E-mail subject to legal action. I reserve the
right to publicly post or ridicule any abusive E-mail. Reply to
domain Patriot dot net user shmuel+news to contact me. Do not
reply to ***@library.lspace.org