How IBM gambled and failed to standardize ASCII on the IBM System/360

02.12.2025 — In Mainframe

Computers work with text by encoding characters in an alphabet as numbers.

There are several natural questions when considering character encoding:

  • What alphabet would we like to encode?
  • Do we care about things like lowercase vs uppercase?
  • Which number should represent which character?

During the 1950s, computer manufacturers tended to create their own character encodings from scratch. A UNIVAC represented text differently than an IBM 704. Some customers, such as the US Army Signal Corps, threw their hands up at the situation and created their own character encoding.
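The incompatibility is easy to see today, because modern systems still carry codecs for both families. A minimal Python sketch, using the built-in "cp037" codec as a stand-in for a common EBCDIC variant (the 1950s vendor codes themselves predate it, so this is purely illustrative):

```python
# The same five characters produce entirely different bytes
# under ASCII and under EBCDIC (cp037, a common EBCDIC variant).
text = "HELLO"

ascii_bytes = text.encode("ascii")   # ASCII:  48 45 4C 4C 4F
ebcdic_bytes = text.encode("cp037")  # EBCDIC: C8 C5 D3 D3 D6

print(ascii_bytes.hex(" "))
print(ebcdic_bytes.hex(" "))
```

A tape written on one machine was gibberish on another unless someone wrote a translation table by hand.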

Around this time, IBM began research and development on their 7030 Stretch supercomputer, so named because it was intended to "stretch the limits of computer performance" at the time. One of its novel innovations was memory addressable in 8-bit units, called "bytes." The 8-bit byte is now standard on virtually all computer architectures.

Given that existing character encodings used 6 bits and supported only uppercase characters, this wider "byte" enabled the idea of a new "extended character set." Bob Bemer, an IBM engineer on the Stretch project, saw this as an opportunity to improve intercommunication between computers, and he published a call for a standard in the Communications of the ACM in 1959.

Over the course of several years, the X3 committee of the American Standards Association (ASA, the predecessor of today's ANSI) labored on this task, resulting in the American Standard Code for Information Interchange (ASCII) standard as of June 17, 1963.

One of IBM's hopes for ASCII was to use it as the character encoding standard for the new IBM System/360 mainframe family, planned for release in 1964. Since this would be the first computer family with a common assembly language and broad binary compatibility, it presented an opportune moment to migrate customers from IBM's existing character encoding scheme to the new standard.

Unfortunately, as the standardization process of the ASCII committee dragged on, it seemed that ASCII might not be completed in time for the System/360 release. IBM engineers thus worked on an alternative encoding scheme that extended the existing 6-bit "Binary-Coded Decimal Interchange Code" in a backward-compatible way, yielding the alternative EBCDIC encoding.

The end of the story is best left to a snippet of a May 1998 Dr. Dobb's Journal interview with Bob Bemer:

Don't ASCII, Don't Telly

DDJ: When I think about the standards you are responsible for and the projects you have worked on in your career, it seems that the name "Bob Bemer" ought to be a household word among programmers. But I suspect that's not the case. Maybe after this interview, that will change a little.

You're known, among those who have heard of you, as the inventor of ASCII, the inventor of the escape sequence, the guy who named Cobol, a pioneer in word processing and time sharing and international standards for data processing. I want to hear about your Year 2000 solution, but I'd also like to know how you came to invent ASCII.

BB: I made a survey in 1960 [while working for IBM] and found out there were over 60 different ways the alphabet was coded for various computers. So I started to pick out the problem of interchanging files, and I started making proposals for a single code. Before that I did the character set for the Stretch machine at Los Alamos. That was the first eight-bit-byte computer that I know of. But I made a mistake. I put the alphabet in as capital A, lowercase a, capital B, lowercase b. And that was stupid. It was then that I wound up with the escape sequence idea that I published sometime in 1960.

DDJ: But ASCII became more than an IBM standard. How did the internationalization of ASCII come about?

BB: I was invited to talk to the British Standards Institution and I got to go to the electronic industry association. And finally I was called by two IBM vice presidents. They said they would like to revitalize what was then called the OEMI -- the Office Equipment Manufacturer's Institute -- and they wanted proposals from me for what should be done in the way of computer standards. We had a big meeting, the first meeting of X3 under ANSI auspices. And that later became an ISO committee when we went international with the standards for computers.

Then came the fateful day. We had ASCII going. We were about to sign off and the 360 was about to get out. And Freddie Brooks tells me that the printers and punches are not ready in ASCII.

DDJ: They were still designed for EBCDIC?

BB: Right. And one manager, now dead (but I won't say "God rest his soul") decided they were going to do both. They would put in p bit, and if the p bit was 0, the machine would run in EBCDIC. If the p bit was 1, it would run in ASCII. He thought that was a reasonable way to solve the problem, because he had to announce the 360 in a hurry. So they did. Unfortunately nobody told the programmers, and they did all their systems programming in EBCDIC. As a result, they couldn't make the thing run in ASCII. So ASCII originated at IBM, but they didn't follow through with it. Isn't that a crazy story?

The net result was that ASCII did become the industry-standard encoding, forming the basis for Unicode in later decades. However, shipping IBM mainframes with EBCDIC caused IBM mainframe customers to propagate forward the very character encoding IBM had hoped to replace. Given that System/360 was guaranteed to remain forward compatible indefinitely, mainframe software often uses EBCDIC to this day. If only the standards committee had moved a bit faster, or if only IBM had reallocated resources in the last year leading up to the System/360 release, things might be a lot simpler today.
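The Unicode connection is easy to demonstrate: the first 128 Unicode code points are exactly the ASCII characters, so pure-ASCII text encodes to identical bytes in UTF-8, while EBCDIC disagrees on every letter and digit. A short Python sketch (again using the cp037 codec as a representative EBCDIC variant):

```python
s = "IBM System/360"

# ASCII is a strict subset of UTF-8: pure-ASCII text yields identical bytes.
assert s.encode("ascii") == s.encode("utf-8")

# EBCDIC (cp037 here) assigns different bytes to every letter and digit.
assert s.encode("cp037") != s.encode("ascii")

print(s.encode("utf-8").hex(" "))
print(s.encode("cp037").hex(" "))
```

This byte-level compatibility is a large part of why ASCII-descended encodings won out everywhere except the mainframe.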

Given that the ASCII character set generally defines the characters used for programming language source code, programmers often wonder how ASCII ended up with punctuation like the "curly braces" '{' and '}'. If we didn't have those, it's much more likely that modern languages would have carried forward 'BEGIN' and 'END' from ALGOL. If this topic seems interesting to you, check out "The Great Curly Brace Trace Chase" by Bob Bemer at https://web.archive.org/web/20090604210339/http://home.ccil.org/~remlaps/www.bobbemer.com/BRACES.HTM

Sources

  1. "A proposal for a generalized card code for 256 characters" by Bob Bemer in Communications of the ACM, Volume 2, Issue 9, pp. 19-23. Available at https://dl.acm.org/doi/10.1145/368424.368435

  2. "A Chat with Bob Bemer" by Michael Swaine in Dr. Dobb's Journal May 1998. Available at https://jacobfilipp.com/DrDobbs/articles/DDJ/1998/9805/9805m/9805m.htm

© 2018 by Sean McBride. All rights reserved.
Last build: 15.03.2026