Why are PDP-7-style microprogrammed instructions out of vogue?

DEC's computers, especially those in the 18-bit and 12-bit families, had these opr (operate) instructions, which contained several bitfields, each encoding something like a "subinstruction". Things like



  • clear the accumulator

  • increment the accumulator

  • rotate the accumulator one place leftward

  • complement the accumulator

  • skip if the accumulator is zero

The nature of these simple operations makes it convenient to encode each one in its own bit or bitfield of the instruction word, and to have the computer execute them in a statically scheduled manner. My understanding is that this is because they are often used together¹ and have simple encodings.



A later computer like the Z80 or ARM7 needs to fetch, decode and execute a separate instruction to perform each of these operations, which might not be as space or time efficient.



From what I can tell, using DEC-style microcoded instructions to perform any number of simple operations on a single register has fallen out of vogue, or is at least not nearly as common on modern instruction set architectures. Why is this?




1: Not only to load small integers into the accumulator, as in cla cll cml rtl inc to set the accumulator to 3 on the PDP-8, but also for examining or manipulating bitfields, probably long division, etc.
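The combined-operation idea in the footnote can be sketched in a small simulator. This is a hypothetical, simplified model of PDP-8-style Group 1 OPR decoding: the bit assignments follow common descriptions of the PDP-8, but IAC's carry into the link is omitted and the exact event ordering is an assumption of this sketch.

```python
# Each bit in the low bits of the OPR word enables one micro-operation;
# the hardware executes the enabled ones in a fixed (statically
# scheduled) order: clears, complements, increment, then rotates.

CLA, CLL = 0o200, 0o100   # clear accumulator / clear link
CMA, CML = 0o040, 0o020   # complement accumulator / complement link
RAR, RAL = 0o010, 0o004   # rotate right / rotate left (through link)
TWICE    = 0o002          # rotate two places instead of one
IAC      = 0o001          # increment accumulator

def opr_group1(bits, ac=0, link=0):
    """Execute one microprogrammed OPR word on a 12-bit AC and 1-bit link."""
    if bits & CLA: ac = 0
    if bits & CLL: link = 0
    if bits & CMA: ac ^= 0o7777
    if bits & CML: link ^= 1
    if bits & IAC: ac = (ac + 1) & 0o7777   # carry into link omitted
    for _ in range(2 if bits & TWICE else 1):
        c = (link << 12) | ac               # 13-bit rotate through link
        if bits & RAL:
            c = ((c << 1) | (c >> 12)) & 0o17777
        elif bits & RAR:
            c = ((c >> 1) | ((c & 1) << 12)) & 0o17777
        link, ac = c >> 12, c & 0o7777
    return ac, link

# One word does the work of several instructions:
print(opr_group1(CLA | CLL | CML | RAL | TWICE))   # prints (2, 0)
```

Under this event ordering, CLA CLL CML RTL yields AC=2, and CLA CLL CML IAC RAL yields AC=3 — one fetch, five micro-operations.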










  • What you're describing are VLIW architectures - except the question you have is quite unclear - adding 'Why this' to a description isn't exactly a question.

    – Raffzahn
    Apr 12 at 14:19











  • @Raffzahn I think I've identified a trend; I am asking if it's there, and if so, what's motivated it. My understanding of VLIW is that the operations are dyadic, or have variable arities, but on the PDP-7 et al., the operations were all strictly monadic.

    – Wilson
    Apr 12 at 14:26






  • This is a little off topic. The DEC PDP-6 had 16 variations on the Boolean operations. It used four bits out of the opcode field to specify a truth table for the corresponding Boolean operation. Thus it was able to implement 16 operations with about the same logic that it would have taken to implement just one.

    – Walter Mitty
    Apr 12 at 14:36












  • @Wilson VLIW is not intrinsically tied to any kind of operation. The basic idea is that there is no (general) decoding; each function unit that can be initiated separately gets its own mark in the instruction field. Thus the decoder stage can be removed - or at least greatly simplified.

    – Raffzahn
    Apr 12 at 14:48






  • Yes these opcodes are in the PDP-10 as well. Open the opcode list and take a close look at opcodes 400-477. If you convert the opcodes from octal to binary, you will find four bits that provide a truth table for the operation in question. SETZ has all four of these bits set to zero, and SETO has all four set to one. AND has three zeroes and a one.

    – Walter Mitty
    Apr 13 at 13:36
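Walter Mitty's truth-table scheme can be sketched as follows. The bit ordering of the 4-bit table here is an assumption for illustration, not the exact PDP-6/PDP-10 encoding:

```python
# One generic circuit implements all 16 bitwise Boolean operations:
# four opcode bits act as a truth table indexed by the (a, b) bit pair.

WIDTH = 36  # PDP-10 word size

def boolean_op(table, a, b):
    """Apply a 4-bit truth table bitwise: output bit = table[(a_bit << 1) | b_bit]."""
    result = 0
    for i in range(WIDTH):
        idx = (((a >> i) & 1) << 1) | ((b >> i) & 1)
        result |= ((table >> idx) & 1) << i
    return result

SETZ = 0b0000   # always 0
SETO = 0b1111   # always 1
AND  = 0b1000   # 1 only when both bits are 1 ("three zeroes and a one")
XOR  = 0b0110

print(oct(boolean_op(AND, 0o14, 0o12)))   # 0o14 AND 0o12 -> prints 0o10
```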
















instruction-set microcode






asked Apr 12 at 13:51









Wilson












2 Answers






[...] had these opr instructions, which contained many bitfields which encoded something like "subinstructions"[...]




What you describe is basically a (V)LIW instruction format - at least that's what it might be called today. That's what computers started out with: separate bits for each function to be applied to the addressed value.



DEC is somewhat of a bad example here, as its accumulator instructions are a special case - already a hybrid between a clean all-over LIW design and dedicated encoding. The LIW aspect is used only for this accumulator subset.



Zuse's machines, like the Z22, might make a better example with their ability to have each and every instruction carry multiple operations.




A later computer like the Z80 or ARM7 needs to fetch, decode and execute a separate instruction to perform each of these operations,




Yes - and no. For one, not all possible combinations could be used together, resulting in illegal instructions. In fact, depending on the machine's construction, most of these combinations were illegal - and that's why dedicated instructions took over. Let's assume there are, say, 8 different operational units in the data path. Having one bit for each in the instruction word makes decoding easy, as each bit would just be wired up to the enable line of a single function, resulting in a fast and simple machine structure.



Of these 256 combinations (one of which would be a nop), many would not make sense - think of shifting left and shifting right, or adding and subtracting, at the same time. By encoding only the, say, 20 useful combinations into a 5-bit field, 3 bits (almost half) could be freed - at the cost of an additional decoding stage.
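This trade-off can be sketched in a few lines. The unit names and the contents of the decode table below are invented for illustration, not taken from any real machine:

```python
# Hypothetical 8 function-unit enable lines.
UNITS = ["clear", "complement", "increment", "shl", "shr", "add", "sub", "skip"]

def decode_onehot(word):
    # Bit-per-unit ("LIW") format: no decoding stage at all -
    # each instruction bit IS an enable line.
    return {u for i, u in enumerate(UNITS) if (word >> i) & 1}

# Encoded format: only the useful combinations get a code point,
# recovered through an extra decode table (a dict standing in for
# a decoder ROM/PLA).
DECODE_ROM = {
    0b00000: set(),                    # nop
    0b00001: {"clear"},
    0b00010: {"clear", "increment"},   # "load 1" as one combination
    0b00011: {"add", "shl"},           # a fused add-and-shift
    # ... up to ~20 useful combinations fit in 5 bits
}

def decode_encoded(field):
    return DECODE_ROM[field]

print(decode_onehot(0b00000101))   # enables 'clear' and 'increment'
```

The one-hot form spends 8 bits but needs no logic; the encoded form spends 5 bits plus one table lookup - exactly the extra decoding stage described above.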



Now, back in the old days, when machines were word-oriented (e.g. 36 bits per word), there was plenty of space - even resulting in unused bits. No need to add a decoding stage. Worse, doing so would slow down execution - only a bit, but it would.



The situation changed when machines became byte-oriented and variable-length instruction formats were used. Here, cramming the 8 unit-enable lines down into a single encoded 5-bit field let the instruction squeeze into a byte while leaving room for more (like a register number), without the need to fetch two bytes. It even leaves 12x8 instruction code points for other encodings/irregular instructions without needing more space.




which might not be as space or time efficient.




That's partially true for time efficiency, but not for space - space-wise it's an extreme saving, enabling more compact code. The inner workings are (or can be) still mostly the same, just less visible: instead of setting a shift bit and an add bit, there's now an Add-and-Shift instruction.



Then again, by now encoding it in a single byte instead of a full 36-bit word, the CPU can fetch instructions at the same speed (byte bus vs. word bus) or even 4 times the speed (word-sized bus) as before. So with memory always being the slowest part, tighter encoding not only saves space but also speeds up execution - despite the additional decoding stage.




From what I can tell, [this] has fallen out of vogue, or are at least not nearly as common on modern instruction set architectures.




"Not nearly as common" on the surface is maybe the point here. For one, explicit VLIW instructions are still a thing (think Itanium), but more importantly, they are always an option for the internal workings of modern CPUs, where 'traditional' code first gets decoded into sub-operations, which are later either combined into LIW-like instructions again or scheduled in parallel across different function units.



In fact, the mentioned ARM makes another good case for why it vanished. ARM traditionally had the ability to execute every instruction conditionally (much as Zuse did first). Cool when thinking in terms of sequential execution, but a gigantic hurdle for modern CPUs that reorder instructions according to available data and function units: it makes rescheduling not just hard, but almost impossible. Even worse, ARM featured DEC-like condition handling, where each and every load changed the flags.



Bottom line: just because something isn't (always) visible to the user-side programmer doesn't mean it isn't there.






  • The Transmeta CPUs were a somewhat recent example of CPUs that used a (proprietary) VLIW instruction set internally, and another completely different one (namely x86) externally. In the Itanium, the VLIW bundles have explicit parallelism semantics (Intel calls this Explicit Parallel Instruction Computing (EPIC)), i.e. one VLIW bundle is two 64-bit words containing three 41-bit instructions and a 5-bit "template" that tells the CPU what kinds of instructions the three instructions are and what the data dependencies are.

    – Jörg W Mittag
    Apr 13 at 7:06











  • Can you share (by way of a link to the reference or by adding to your answer) an example of how the Z22 instruction format allowed more operations to be specified in a single word? It must be completely unlike the earlier Z4 if that's the case.

    – Wilson
    Apr 13 at 12:03






  • I tried to google it but my German is really quite bad by now.

    – Wilson
    Apr 13 at 12:04











  • @Wilson Ouch. Ok, to start with: the Z4 is an original Zuse (the man himself) design, while the Z22 was conceived by Theodor Fromme (call him the design lead) with much help from Heinz Zemanek and Rudolf Bodo, who both designed the Mailüfterl and made the schematics for the Z22. The idea was to design the tube-based Z22 in a way that it could be transistorized later - which happened with the Z23. Quite remarkable planning for that time. ... more to follow

    – Raffzahn
    Apr 13 at 13:52







  • @Wilson you asked for it: How were Zuse Z22 Instructions Encoded? ... wasted another perfectly good day - even included some German for you to test your knowledge :))

    – Raffzahn
    Apr 14 at 0:10
































The PDP-7 was a one-address machine. All instructions occupied 18 bits. The operations that manipulated the accumulator didn't reference memory, and therefore didn't need an address. But the address bits were in the instruction anyway, because every instruction was encoded in an 18-bit word. So why not use those otherwise-unused bits to get more out of the instruction?



Once you get to opcodes with a variable number of operand addresses, the need to economize in this way goes away.
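This reuse of the address bits can be illustrated with a sketch of an 18-bit one-address format. The 13-bit address width follows the comments on this answer; the 4-bit opcode and indirect-bit layout are assumptions made for illustration:

```python
ADDR_BITS = 13  # 13-bit address in an 18-bit instruction word

def fields(word):
    """Split an 18-bit instruction into assumed opcode / indirect / address fields."""
    opcode   = (word >> 14) & 0o17
    indirect = (word >> 13) & 1
    address  = word & ((1 << ADDR_BITS) - 1)
    return opcode, indirect, address

# For an accumulator-only OPR-class instruction, the 13-bit address field
# is dead weight -- free bits to encode micro-operation enables instead.
opr = 0o740000  # hypothetical OPR-class opcode, all sub-operation bits clear
print(fields(opr))   # prints (15, 0, 0)
```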
























  • To add to this, the PDP-7 is from an era when it was common for the width of the address bus to be less than the width of the data bus. In this case, you could fit a full 13-bit address into an 18-bit instruction word, which meant that you could pack an entire instruction (including the operand address) into a single word. Compare this to a CPU like the 6502 with 8-bit words and 16-bit addresses: if you can't fit an address into an instruction word then naturally they must come in extra bytes that follow the opcode byte. (continued)

    – Ken Gober
    Apr 13 at 14:56






  • The flip side of being able to fit the address into the instruction word was that you wasted a lot of bits for instructions that did not need an operand address or jump address. So the PDP-7 style sub-instructions were essentially a way to use unused bits in the instruction word to encode additional instructions, allowing many more instructions to be added without the cost of widening the word size, the only caveat being that the extra instructions had to be ones that didn't need to include an address.

    – Ken Gober
    Apr 13 at 15:00






  • @KenGober, I think you and I are saying the same thing, in different words. Thanks for adding a little clarity.

    – Walter Mitty
    Apr 13 at 19:36











Your Answer








StackExchange.ready(function()
var channelOptions =
tags: "".split(" "),
id: "648"
;
initTagRenderer("".split(" "), "".split(" "), channelOptions);

StackExchange.using("externalEditor", function()
// Have to fire editor after snippets, if snippets enabled
if (StackExchange.settings.snippets.snippetsEnabled)
StackExchange.using("snippets", function()
createEditor();
);

else
createEditor();

);

function createEditor()
StackExchange.prepareEditor(
heartbeatType: 'answer',
autoActivateHeartbeat: false,
convertImagesToLinks: false,
noModals: true,
showLowRepImageUploadWarning: true,
reputationToPostImages: null,
bindNavPrevention: true,
postfix: "",
imageUploader:
brandingHtml: "Powered by u003ca class="icon-imgur-white" href="https://imgur.com/"u003eu003c/au003e",
contentPolicyHtml: "User contributions licensed under u003ca href="https://creativecommons.org/licenses/by-sa/3.0/"u003ecc by-sa 3.0 with attribution requiredu003c/au003e u003ca href="https://stackoverflow.com/legal/content-policy"u003e(content policy)u003c/au003e",
allowUrls: true
,
noCode: true, onDemand: true,
discardSelector: ".discard-answer"
,immediatelyShowMarkdownHelp:true
);



);













draft saved

draft discarded


















StackExchange.ready(
function ()
StackExchange.openid.initPostLogin('.new-post-login', 'https%3a%2f%2fretrocomputing.stackexchange.com%2fquestions%2f9666%2fwhy-are-pdp-7-style-microprogrammed-instructions-out-of-vogue%23new-answer', 'question_page');

);

Post as a guest















Required, but never shown

























2 Answers
2






active

oldest

votes








2 Answers
2






active

oldest

votes









active

oldest

votes






active

oldest

votes









11















[...] had these opr instructions, which contained many bitfields which encoded something like "subinstructions"[...]




What you describe is basically a (V)LIW instruction format - at least that's what it might be called today. That's what computers started out with. Separate bits for each function to be applied to the value addressed.



The DEC is somewhat of a bad example here, as its accumulator instructions are a special kind, already a bastard between clean all over LIW and dedicated encoding. The LIW aspect is used only for this accumulator subset.



Zuse's machines, like the Z22, might make a better example with their ability to have each and every instruction carry multiple operations.




A later computer like the Z80 or ARM7 needs to fetch, decode and execute a separate instruction to perform each of these operations,




Yes - and no. For one, not all possible combinations could be used together, resulting in illegal instructions. In fact, depending on the machine's construction, most of these combinations were illegal. And that's why dedicated instructions took over. Let's assume, there are like 8 different operational units in the data path. Having one bit for each in the instruction word makes easy decoding, as each would just be wired up with the enable for a single function, resulting in a fast and simple machine structure.



Of these 256 combinations (of which one would be a nop), many would not make sense - think shifting left and shifting right, or adding and subtracting at the same time. By encoding only the 20 useful combinations into a 5 bit field, 3 bits (almost half) could be freed - at the cost of an additional decoding stage.



Now, back in the old times, when machines were word-orientated (e.g. 36 bits in one word), there was much space - even resulting in unused bits. No need to add a decoding stage. Even worse, doing so would slow down the execution. Well, only a bit, but it would.



The situation changed when machines became byte-orientated and variable length instruction formats were used. Here cramping down the 8 unit lines into a single encoded 5-bit field enabled it to squeeze into a byte while leaving room for more (like a register number), without the need to fetch two bytes. Heck, it even leaves 12x8 instruction points for other encodings/irregular instructions without needing more.




which might not be as space or time efficient.




That's partially true for the time efficiency, but not space - space-wise it's an extreme saving enabling more compact code. The inner workings are (can be) still (mostly) the same, but less visible. Instead of setting a shift and an add bit, there's now a Add-And-Shift instruction.



Then again, by now encoding it into a single byte instead of a full 36 bit word, the CPU can fetch the instructions at the same speed (byte bus vs. word bus) or even 4 times the speed (word sized bus) than before. So with memory always being the slowest part, tighter encoding does not only save space, but also speeds up execution - despite the additional decoding stage.




From what I can tell, [this] has fallen out of vogue, or are at least not nearly as common on modern instruction set architectures.




Not nearly as common on the surface is maybe the point here. For one, explicit VLIW instructions are still a thing (think Itanium), but more importantly, they are always an option for internal workings of modern CPUs. Where 'traditional' code gets first decoded into sub-operations, and these later get either combined to LIW instructions again, or scheduled in parallel over different function units.



In fact, the ARM mentioned in the question makes another good case for why this vanished. ARM traditionally had the ability to execute every instruction conditionally (much like Zuse did first). Cool when thinking in terms of sequential execution, but a gigantic hurdle for modern CPUs that reorder instructions according to available data and function units. It makes rescheduling not just hard, but almost impossible. Even worse, ARM featured DEC-like condition handling, where each and every load changed the flags.



Bottom line: Just because something isn't (always) visible to the user-side programmer doesn't mean it isn't there.































    The Transmeta CPUs were a somewhat recent example of CPUs that used a (proprietary) VLIW instruction set internally, and a completely different one (namely x86) externally. In the Itanium, the VLIW bundles have explicit parallelism semantics (Intel calls this Explicitly Parallel Instruction Computing (EPIC)), i.e. one VLIW bundle is two 64-bit words holding three 41-bit instructions and a 5-bit "template" that tells the CPU what kinds of instructions the three are and what the data dependencies are.

    – Jörg W Mittag
    Apr 13 at 7:06











  • Can you share (by way of a link to the reference, or by adding to your answer) an example of how the Z22 instruction format allowed more operations to be specified in a single word? It must be completely unlike the earlier Z4 if that's the case.

    – Wilson
    Apr 13 at 12:03











    I tried to google it but my German is really quite bad by now.

    – Wilson
    Apr 13 at 12:04











  • @Wilson Ouch. Ok, to start with, the Z4 is an original Zuse (the man himself) design, while the Z22 was conceived by Theodor Fromme (call him design lead) with much help from Heinz Zemanek and Rudolf Bodo, who both designed the Mailüfterl and made the schematics for the Z22. The idea was to design the tube-based Z22 in a way that it could be transistorized later - which happened with the Z23. Quite remarkable planning for that time. ... more to follow

    – Raffzahn
    Apr 13 at 13:52












    @Wilson you asked for it: How were Zuse Z22 Instructions Encoded? ... wasted another perfectly good day - even included some German for you to test your knowledge :))

    – Raffzahn
    Apr 14 at 0:10














































edited yesterday
answered Apr 12 at 14:45
– Raffzahn
































The PDP-7 was a one-address machine. All instructions occupied 18 bits. The operations that manipulated the accumulator didn't reference memory, and therefore didn't need an address. But the address bits were in the instruction anyway, because every instruction was encoded in an 18-bit word. So why not use these otherwise unused bits to get more out of the instruction?



Once you get to opcodes with a variable number of operand addresses, the need to economize in this way goes away.
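The scheme above can be sketched in a few lines. The bit positions and micro-operation names below are invented for illustration - this is the general idea, not the PDP-7's actual bit assignments:

```python
# Simplified sketch (invented layout, not the real PDP-7 encoding): an 18-bit
# word holds a 4-bit opcode plus an address field; for the "operate" opcode
# the address bits are reused as independent sub-instruction flags.

OPR = 0b1111                            # hypothetical operate-class opcode

def decode18(word: int):
    opcode = (word >> 14) & 0xF
    rest = word & 0x3FFF                # the 14 low bits
    if opcode == OPR:
        # no memory reference needed: each low bit enables one
        # accumulator micro-operation instead of addressing memory
        flags = {name for i, name in enumerate(["CLA", "CMA", "RAL", "RAR"])
                 if rest & (1 << i)}
        return ("operate", flags)
    return ("memory-ref", opcode, rest)  # rest is the operand address

assert decode18((OPR << 14) | 0b0011) == ("operate", {"CLA", "CMA"})
```

The same 14 bits are either an address or a bundle of micro-operation enables, selected by the opcode - which is exactly why the trick only works for instructions that need no address.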





























    To add to this, the PDP-7 is from an era when it was common for the width of the address bus to be less than the width of the data bus. In this case, you could fit a full 13-bit address into an 18-bit instruction word, which meant that you could pack an entire instruction (including the operand address) into a single word. Compare this to a CPU like the 6502 with 8-bit words and 16-bit addresses: if you can't fit an address into an instruction word then naturally they must come in extra bytes that follow the opcode byte. (continued)

    – Ken Gober
    Apr 13 at 14:56











    The flip side of being able to fit the address into the instruction word was that you wasted a lot of bits for instructions that did not need an operand address or jump address. So the PDP-7 style sub-instructions were essentially a way to use unused bits in the instruction word to encode additional instructions, allowing many more instructions to be added without the cost of widening the word size, the only caveat being that the extra instructions had to be ones that didn't need to include an address.

    – Ken Gober
    Apr 13 at 15:00











    @KenGober, I think you and I are saying the same thing, in different words. Thanks for adding a little clarity.

    – Walter Mitty
    Apr 13 at 19:36

























answered Apr 12 at 14:41
– Walter Mitty







  • 2





    To add to this, the PDP-7 is from an era when it was common for the width of the address bus to be less than the width of the data bus. In this case, you could fit a full 13-bit address into an 18-bit instruction word, which meant that you could pack an entire instruction (including the operand address) into a single word. Compare this to a CPU like the 6502 with 8-bit words and 16-bit addresses: if you can't fit an address into an instruction word then naturally they must come in extra bytes that follow the opcode byte. (continued)

    – Ken Gober
    Apr 13 at 14:56






  • 2





    The flip side of being able to fit the address into the instruction word was that you wasted a lot of bits for instructions that did not need an operand address or jump address. So the PDP-7 style sub-instructions were essentially a way to use unused bits in the instruction word to encode additional instructions, allowing many more instructions to be added without the cost of widening the word size, the only caveat being that the extra instructions had to be ones that didn't need to include an address.

    – Ken Gober
    Apr 13 at 15:00






  • 2





    @KenGober, I think you and I are saying the same thing, in different words. Thanks for adding a little clarity.

    – Walter Mitty
    Apr 13 at 19:36












  • 2





    To add to this, the PDP-7 is from an era when it was common for the width of the address bus to be less than the width of the data bus. In this case, you could fit a full 13-bit address into an 18-bit instruction word, which meant that you could pack an entire instruction (including the operand address) into a single word. Compare this to a CPU like the 6502 with 8-bit words and 16-bit addresses: if you can't fit an address into an instruction word then naturally they must come in extra bytes that follow the opcode byte. (continued)

    – Ken Gober
    Apr 13 at 14:56






  • 2





    The flip side of being able to fit the address into the instruction word was that you wasted a lot of bits for instructions that did not need an operand address or jump address. So the PDP-7 style sub-instructions were essentially a way to use unused bits in the instruction word to encode additional instructions, allowing many more instructions to be added without the cost of widening the word size, the only caveat being that the extra instructions had to be ones that didn't need to include an address.

    – Ken Gober
    Apr 13 at 15:00






  • 2





    @KenGober, I think you and I are saying the same thing, in different words. Thanks for adding a little clarity.

    – Walter Mitty
    Apr 13 at 19:36






