
What does the “x” in “x86” represent?



I have read the following in the x86 Wikipedia page:

    The term "x86" came into being because the names of several successors to Intel's 8086 processor end in "86", including the 80186, 80286, 80386 and 80486 processors.

But what does the "x" in "x86" represent? Is it a "variable" that can be something like "801" or "802" or "803" or "804"?










Tags: cpu, x86, terminology

Asked Apr 15 at 16:44 by user12302 (a new contributor); edited Apr 16 at 10:07 by Joel Reyes Noche. Question score: 19.

Comments on the question:

  • 80 _ 86 (nothing in between), 80 1 86, 80 2 86, 80 3 86, 80 4 86... notice the pattern? – user17915, Apr 16 at 0:15

  • An "x" in IC part numbering is a common way to mark a variable ID within the same IC family. Its meaning can be anything: in CPUs it is usually the generation of the processor, in MCUs it might indicate RAM or EEPROM size, for voltage regulators it is the target voltage, and so on. For TTL logic numbered XXYY, like 7474, the XX indicates the quality grade (from commercial to military), so to be sure, see the datasheet of the part. To get back to your question: Intel CPUs/MCUs started using shortened markings like x86 and x51, which are really shortcuts for 8086... and 8051..., and it sort of stuck with the community too. – Spektre, Apr 16 at 6:37

  • @bogl Heh, I did not consider that comment an answer, rather some additional info I did not see in the other answers, and I was reluctant to create an answer of my own as there are already good answers present. Should I move it into an answer? – Spektre, Apr 16 at 8:10

  • Up to you, I have no say here. ;) But to me, it looks very much like an answer. – bogl, Apr 16 at 8:15

  • OT in Retrocomputing ... ;-) – Peter A. Schneider, Apr 16 at 14:37



























7 Answers


















Answer (score 43), answered Apr 15 at 17:13 by supercat, edited Apr 15 at 20:25














The term x86 is shorthand for 80x86, which was used to refer to any member of the family 8086 (and also, incidentally, the 8088), 80186, 80286, etc. Things have since gotten a bit muddled by the fact that while the 80386 had a mode that was compatible with the old architecture, it also introduced some fundamentally new ways of doing things, which were shared by the 80486 as well as "named" processors like the Pentium, Pentium Pro, etc. Thus it is sometimes ambiguous whether the name "x86" is used in reference to the architecture that started with the 8086, or the one which had its debut with the 80386.






Comments:

  • @BrianH: Perhaps "mode" wasn't the best term. Maybe "ways of doing things" is better, though some of those new ways of doing things also included new 32-bit modes. Perhaps the most important point is that compilers targeting code for the 80386 and later processors will tend to do things fundamentally differently from those targeting the 80286 and earlier processors, so they really should be viewed as distinct architectures. – supercat, Apr 15 at 20:30

  • @BrianH, 32-bit protected mode with paging and all that is pretty much fundamentally different from the 16-bit protected mode in the 286. – ilkkachu, Apr 15 at 22:16

  • Also, from memory, the 286 had to reset to come out of protected mode, while the 386 could change modes at will, so protected mode wasn't widely used until the 386 came along. – Joseph Rogers, Apr 16 at 14:15

  • @JosephRogers: Protected mode was the only way to access more than 1 MB of address space, so using storage beyond that region within a DOS program would require switching to protected mode, doing the access, setting a special "the reset handler should reload some registers and resume normal operation" flag, and then asking the keyboard chip to trigger a CPU reset. There was actually an undocumented way code could access upper memory without that rigamarole, but that wasn't discovered until the 80386 came along. It's really a shame that the designers of protected mode failed to recognize... – supercat, Apr 16 at 15:15

  • ...what had been uniquely good about the way real-mode segments worked. Had the 80386 protected mode incorporated the better aspects of 8086 segmentation, it would have been practical for a framework like .NET to allow programs to access many gigs of storage using 32-bit object references (which would only take half as much cache space as 64-bit ones). Make segment identifiers 32 bits, with the upper portion selecting a descriptor that contains a base and a scale factor, and the lower containing a scaled offset. That would allow every object to start at address 0 of some segment... – supercat, Apr 16 at 15:19
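A minimal C sketch of the real-mode address calculation (segment × 16 + offset) behind the 1 MB limit discussed in the comments above; the function name is just illustrative, not anything from the original answer:

    #include <stdint.h>
    #include <stdio.h>

    /* Real-mode (8086-style) physical address: (segment << 4) + offset.
       With 16-bit segments and offsets the maximum is 0xFFFF0 + 0xFFFF = 0x10FFEF,
       i.e. 1 MB plus almost 64 KB; the extra region is only reachable
       when the A20 address line is enabled. */
    static uint32_t real_mode_address(uint16_t segment, uint16_t offset)
    {
        return ((uint32_t)segment << 4) + (uint32_t)offset;
    }

    int main(void)
    {
        /* Two classic corner cases: the reset vector, and the very top of the range. */
        printf("F000:FFF0 -> 0x%05X\n", (unsigned)real_mode_address(0xF000, 0xFFF0)); /* 0xFFFF0 */
        printf("FFFF:FFFF -> 0x%05X\n", (unsigned)real_mode_address(0xFFFF, 0xFFFF)); /* 0x10FFEF */
        return 0;
    }

The second line is why the A20 line matters: on an 8086 with only 20 address lines, that address wraps around to 0x0FFEF instead of reaching past 1 MB.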


















Answer (score 37), answered Apr 15 at 16:53 by Raffzahn














x is meant as a wildcard, so this represents all CPUs able to run 8086-compatible code.






Comments:

  • This answer is so far the only answer that addresses the original question about what the "x" represents. – G. Tranter, Apr 15 at 21:32

  • @G.Tranter I agree up to a point. However, if you write "x86" people usually assume you mean compatible with the Intel 80386, i.e. capable of running in 32-bit protected mode. For example, if you compile a C program with gcc -march=x86, the code won't run on an 8086. – JeremyP, Apr 16 at 8:47

  • @JeremyP Exactly my thoughts. Even in the nineties, I don't remember anybody using x86 to mean 80286 or earlier. The gap was too large to put 16-bit systems in the same bag as 32-bit. When x86 appeared as a term I think everyone meant "80386+" by it. – kubanczyk, Apr 16 at 9:34

  • Read the question. I find it's always best to answer the question, not what I think is the question behind the question. The "x" is a wildcard. If you read the whole question, you'd see that the author already understands the architecture part. – G. Tranter, Apr 16 at 13:41

  • @JeremyP: gcc doesn't have -march=x86; it has -march=i386. godbolt.org/z/xg19XI shows gcc -m32's help for invalid -march=... values, which lists all it supports. If you run x86 gcc with the default -m64, it leaves out arches that only support 32-bit mode. gcc -m16 exists, but still requires 386+ because it mostly just assembles its usual machine code with .code16gcc so instructions with explicit operands get an operand-size and address-size prefix. Anyway, the critical point is that gcc -march never pretended to set the target mode, just ISA extensions within it. – Peter Cordes, Apr 17 at 1:00



















Answer (score 9)














In modern usage it also means software which only uses the 32-bit architecture of the earlier 80x86 processors, to distinguish it from 64-bit applications.



Microsoft uses it that way on 64-bit versions of Windows, which have two separate directories called "Program Files" and "Program Files (x86)."



The 32-bit applications will run on 64-bit hardware, but the OS needs to provide the appropriate 32 or 64 bit interface at run-time.
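As a small illustration of the 32-bit vs. 64-bit distinction this answer describes, here is a C sketch that reports which flavor a program was compiled as, using standard predefined macros (_WIN64 and _WIN32 for MSVC, __x86_64__ and __i386__ for GCC/Clang); the printed wording is just illustrative:

    #include <stdio.h>

    int main(void)
    {
    /* _WIN64 is only defined for 64-bit Windows targets; _WIN32 is defined for
       both 32- and 64-bit Windows builds, so the 64-bit check must come first. */
    #if defined(_WIN64) || defined(__x86_64__)
        puts("64-bit build (installs under \"Program Files\" on 64-bit Windows)");
    #elif defined(_WIN32) || defined(__i386__)
        puts("32-bit build (installs under \"Program Files (x86)\", runs via WOW64)");
    #else
        puts("Built for some other architecture");
    #endif
        printf("Pointer size: %u bytes\n", (unsigned)sizeof(void *));
        return 0;
    }

The choice between the two environments is made by the OS loader from the executable's header, which is the run-time interface selection the answer refers to.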






Comments:

  • That doesn't mean software though, it means the hardware the software is built for. Consider a rack of fan belts labelled "Ford Focus", "Nissan Micra", etc. You're not saying the fan belt is a Nissan Micra, only that it's suitable for use on one. – Graham, Apr 16 at 7:17

  • Only MS Windows uses x86 to specifically exclude x86-64. In other contexts, like computer architecture discussion, x86 includes all CPUs that are backwards-compatible with the 8086, with the usual assumption that modern CPUs are running in their best mode (x86-64 long mode), or at least no implication of 32-bit mode specifically, e.g. "x86 has efficient unaligned loads, but ARM or MIPS doesn't always". But I'd certainly say "modern x86 has 16 integer and 16 or 32 vector registers". (Which is only true in long mode, and I'm of course talking about architectural registers, not physical.) – Peter Cordes, Apr 17 at 1:05

  • TL;DR: In other contexts (outside of MS Windows software), x86-64 is a subset of x86, not a disjoint set. – Peter Cordes, Apr 17 at 1:08

  • A term that unambiguously means 32-bit x86 is "IA-32" (en.wikipedia.org/wiki/IA-32). Intel uses that in their ISA manuals. (For a while, they used IA-32e ("enhanced") for x86-64-specific features and modes; I forget if they still do that or if they call it x86-64.) – Peter Cordes, Apr 17 at 1:29


















Answer (score 6), by Ender - Joshua Pritsker (a new contributor)














Intel products were numbered. For example, their first microprocessor was the 4-bit Intel 4004, which was coupled with the 4001 ROM, 4002 RAM, and 4003 shift register. The start denoted the series, and the last digit denoted the specific part.

Later, the Intel 8008 came along, which was an 8-bit microprocessor. This was succeeded by the 8080, which was then replaced by the 8085, which was in turn replaced by the 8086.

After the 8086, processors started taking on the format 80x86, with x being a number, as in 80186, 80286, 80386, etc. They were backward compatible with one another, and modern computers still boot into 16-bit mode. As Intel continued rolling out processors, they began to be referred to as the Intel 386 or Intel 486 rather than the Intel 80386. This is how the terms 'i386' and 'i586' came into play. As they were based on the same architecture, they were called Intel x86, where x refers to a number. They also came with coprocessors whose last digit was '7', such as the 80387, and as such we also have x87.






Comments:

  • You have a typo: 8186 -> 80186. Too small for me to edit myself. – Martin Bonner, Apr 16 at 10:20

  • @MartinBonner Thanks, fixed. – Ender - Joshua Pritsker, Apr 16 at 21:27

  • Why does "i586" refer to Pentium 1, and why does "i686" refer to Pentium Pro? - i586 is a made-up term. The CPU model numbers were 80500 through 80502 for the P5 / P54C / P55C microarchitectures. But yeah, there was a 5 in there, so i586 is semi-justified for convenience and consistency. – Peter Cordes, Apr 17 at 1:19


















Answer (score 4)














It just means any processor compatible with the same architecture. So it includes the 8088, 8086, 80186, 80286, 80386, 80486, Pentium, etc.




































Answer (score 3), by Oscar (a new contributor)














The name "x86" was never 'given' or 'designed' this way. If I remember correctly, it more or less evolved as a convenient abbreviation for a whole range of compatible processors.

Back in the day when PCs became popular, it was important that your PC was "IBM compatible". This meant, among other things, that your PC had to have an Intel 8086 or an 8088. Later, when Intel released more powerful processors such as the (rare) 80186 or the (popular) 80286, it was still important that your PC was just "MS-DOS" or "IBM compatible". The 80286 was just a faster processor. It had a protected-mode feature, but little software actually used or even required it.

The next step was the 80386. This was an improvement over the 80286 because it had a mode that provided full backward compatibility with 8086 programs. Operating systems such as OS/2, DESQview and MS-Windows used this mode to provide backward compatibility with existing software. Other operating systems such as Linux and the *BSDs designed for PC hardware also depended on new features of the 80386 without actually providing direct compatibility with existing MS-DOS software. All of these systems required an 80386 processor.

Then came the 80486: an even faster and more powerful processor, but mainly backward compatible with the '386. So if you bought a '486 you could still run software designed for the '386. The package would say 'needs a 386 or better' or 'needs 386 or 486'.

Along came the 80586, or Pentium. And then the Pentium Pro, also known as the 80686...

By this time software developers got tired of listing all possible numbers, and since most software was still written to be able to run on a '386, the whole list of numbers was abbreviated to just "x86". This later became synonymous with "32-bit", because the 80386 was a 32-bit processor, and hence software written for 'x86' is 32-bit software.

































Answer (score 1), by i486 (a new contributor)














Practically, x86 is short for "80386 or 80486 running in 32-bit mode". It comes from the 8086/186/286+ line, but Win32 cannot run on a CPU below the 386. After the 80486, the 80x86 naming scheme was changed to Pentium [N] and AMD [model].






Comments:

  • Why does "i586" refer to Pentium 1, and why does "i686" refer to Pentium Pro? explains that, in casual usage, i586 and i686 were used for somewhat justifiable reasons. x86 definitely does not exclude modern CPUs like Skylake! In most contexts other than MS Windows (e.g. CPU architecture discussion) it also doesn't mean specifically 32-bit mode. – Peter Cordes, Apr 17 at 1:12











      Your Answer








      StackExchange.ready(function()
      var channelOptions =
      tags: "".split(" "),
      id: "648"
      ;
      initTagRenderer("".split(" "), "".split(" "), channelOptions);

      StackExchange.using("externalEditor", function()
      // Have to fire editor after snippets, if snippets enabled
      if (StackExchange.settings.snippets.snippetsEnabled)
      StackExchange.using("snippets", function()
      createEditor();
      );

      else
      createEditor();

      );

      function createEditor()
      StackExchange.prepareEditor(
      heartbeatType: 'answer',
      autoActivateHeartbeat: false,
      convertImagesToLinks: false,
      noModals: true,
      showLowRepImageUploadWarning: true,
      reputationToPostImages: null,
      bindNavPrevention: true,
      postfix: "",
      imageUploader:
      brandingHtml: "Powered by u003ca class="icon-imgur-white" href="https://imgur.com/"u003eu003c/au003e",
      contentPolicyHtml: "User contributions licensed under u003ca href="https://creativecommons.org/licenses/by-sa/3.0/"u003ecc by-sa 3.0 with attribution requiredu003c/au003e u003ca href="https://stackoverflow.com/legal/content-policy"u003e(content policy)u003c/au003e",
      allowUrls: true
      ,
      noCode: true, onDemand: true,
      discardSelector: ".discard-answer"
      ,immediatelyShowMarkdownHelp:true
      );



      );






      user12302 is a new contributor. Be nice, and check out our Code of Conduct.









      draft saved

      draft discarded


















      StackExchange.ready(
      function ()
      StackExchange.openid.initPostLogin('.new-post-login', 'https%3a%2f%2fretrocomputing.stackexchange.com%2fquestions%2f9685%2fwhat-does-the-x-in-x86-represent%23new-answer', 'question_page');

      );

      Post as a guest















      Required, but never shown

























      7 Answers
      7






      active

      oldest

      votes








      7 Answers
      7






      active

      oldest

      votes









      active

      oldest

      votes






      active

      oldest

      votes









      43














      The term x86 is shorthand for 80x86, which was used to refer to any member of the family 8086 (and also, incidently, 8088), 80186, 80286, etc. Things have since gotten a bit muddled by the fact that while an 80386 had a mode that was compatible with the old architecture, it also introduced some fundamentally new ways of doing things which were shared by the 80486 as well as "named" processors like the Pentium, Pentium Pro, etc., and thus it is sometimes ambiguous whether the name "x86" is used in reference to the architecture that started with the 8086, or the one which had its debut with the 80386.






      share|improve this answer




















      • 2





        @BrianH: Perhaps "mode" wasn't the best term. Maybe "ways of doing things" is better, though some of those new ways of doing things also included new 32-bit modes. Perhaps the most important point is that compilers targeting code for the 80386 and later processors will tend to do things fundamentally differently from those targeting the 80286 and earlier processors, so they really should be viewed as distinct architectures.

        – supercat
        Apr 15 at 20:30






      • 13





        @BrianH, 32-bit protected mode with paging and all that is pretty much fundamentally different from the 16-bit protected mode in the 286.

        – ilkkachu
        Apr 15 at 22:16






      • 1





        also, from memory the 286 had to reset to come out of protected mode, while the 386 could change modes at will, so protected mode wasn't widely used until the 386 came along.

        – Joseph Rogers
        Apr 16 at 14:15






      • 2





        @JosephRogers: Protected mode was the only way to access more than 1MB of address space, so using storage beyond that region within a DOS program would require switching to protected mode, doing the access, setting a special "the reset handler should reload some registers and resume normal operation" flag, and then asking the keyboard chip to trigger a CPU reset. There was actually an undocumented way code could access upper memory without that rigamarole, but that wasn't discovered until the 80386 came along. It's really a shame that the designers of protected mode failed to recognize...

        – supercat
        Apr 16 at 15:15






      • 1





        ...what had been uniquely good about the way real mode segments worked. Had the 80386 protected mode incorporated the better aspects of 8086 segmentation, it would have been practical for a framework like .NET to allow programs to access many gigs of storage using 32-bit object references (which would only take half as much cache space as 64-bit ones). Make segment identifiers 32 bits, with the upper portion selecting a descriptor that contains a base and a scale factor, and the lower containing a scaled offset. That would allow every object to start at address 0 of some segment...

        – supercat
        Apr 16 at 15:19















      43














      The term x86 is shorthand for 80x86, which was used to refer to any member of the family 8086 (and also, incidently, 8088), 80186, 80286, etc. Things have since gotten a bit muddled by the fact that while an 80386 had a mode that was compatible with the old architecture, it also introduced some fundamentally new ways of doing things which were shared by the 80486 as well as "named" processors like the Pentium, Pentium Pro, etc., and thus it is sometimes ambiguous whether the name "x86" is used in reference to the architecture that started with the 8086, or the one which had its debut with the 80386.






      share|improve this answer




















      • 2





        @BrianH: Perhaps "mode" wasn't the best term. Maybe "ways of doing things" is better, though some of those new ways of doing things also included new 32-bit modes. Perhaps the most important point is that compilers targeting code for the 80386 and later processors will tend to do things fundamentally differently from those targeting the 80286 and earlier processors, so they really should be viewed as distinct architectures.

        – supercat
        Apr 15 at 20:30






      • 13





        @BrianH, 32-bit protected mode with paging and all that is pretty much fundamentally different from the 16-bit protected mode in the 286.

        – ilkkachu
        Apr 15 at 22:16






      • 1





        also, from memory the 286 had to reset to come out of protected mode, while the 386 could change modes at will, so protected mode wasn't widely used until the 386 came along.

        – Joseph Rogers
        Apr 16 at 14:15






      • 2





        @JosephRogers: Protected mode was the only way to access more than 1MB of address space, so using storage beyond that region within a DOS program would require switching to protected mode, doing the access, setting a special "the reset handler should reload some registers and resume normal operation" flag, and then asking the keyboard chip to trigger a CPU reset. There was actually an undocumented way code could access upper memory without that rigamarole, but that wasn't discovered until the 80386 came along. It's really a shame that the designers of protected mode failed to recognize...

        – supercat
        Apr 16 at 15:15






      • 1





        ...what had been uniquely good about the way real mode segments worked. Had the 80386 protected mode incorporated the better aspects of 8086 segmentation, it would have been practical for a framework like .NET to allow programs to access many gigs of storage using 32-bit object references (which would only take half as much cache space as 64-bit ones). Make segment identifiers 32 bits, with the upper portion selecting a descriptor that contains a base and a scale factor, and the lower containing a scaled offset. That would allow every object to start at address 0 of some segment...

        – supercat
        Apr 16 at 15:19













      43












      43








      43







      The term x86 is shorthand for 80x86, which was used to refer to any member of the family 8086 (and also, incidently, 8088), 80186, 80286, etc. Things have since gotten a bit muddled by the fact that while an 80386 had a mode that was compatible with the old architecture, it also introduced some fundamentally new ways of doing things which were shared by the 80486 as well as "named" processors like the Pentium, Pentium Pro, etc., and thus it is sometimes ambiguous whether the name "x86" is used in reference to the architecture that started with the 8086, or the one which had its debut with the 80386.






      share|improve this answer















      The term x86 is shorthand for 80x86, which was used to refer to any member of the family 8086 (and also, incidently, 8088), 80186, 80286, etc. Things have since gotten a bit muddled by the fact that while an 80386 had a mode that was compatible with the old architecture, it also introduced some fundamentally new ways of doing things which were shared by the 80486 as well as "named" processors like the Pentium, Pentium Pro, etc., and thus it is sometimes ambiguous whether the name "x86" is used in reference to the architecture that started with the 8086, or the one which had its debut with the 80386.







      share|improve this answer














      share|improve this answer



      share|improve this answer








      edited Apr 15 at 20:25

























      answered Apr 15 at 17:13









      supercatsupercat

      8,065942




      8,065942







      • 2





        @BrianH: Perhaps "mode" wasn't the best term. Maybe "ways of doing things" is better, though some of those new ways of doing things also included new 32-bit modes. Perhaps the most important point is that compilers targeting code for the 80386 and later processors will tend to do things fundamentally differently from those targeting the 80286 and earlier processors, so they really should be viewed as distinct architectures.

        – supercat
        Apr 15 at 20:30






      • 13





        @BrianH, 32-bit protected mode with paging and all that is pretty much fundamentally different from the 16-bit protected mode in the 286.

        – ilkkachu
        Apr 15 at 22:16






      • 1





        also, from memory the 286 had to reset to come out of protected mode, while the 386 could change modes at will, so protected mode wasn't widely used until the 386 came along.

        – Joseph Rogers
        Apr 16 at 14:15






      • 2





        @JosephRogers: Protected mode was the only way to access more than 1MB of address space, so using storage beyond that region within a DOS program would require switching to protected mode, doing the access, setting a special "the reset handler should reload some registers and resume normal operation" flag, and then asking the keyboard chip to trigger a CPU reset. There was actually an undocumented way code could access upper memory without that rigamarole, but that wasn't discovered until the 80386 came along. It's really a shame that the designers of protected mode failed to recognize...

        – supercat
        Apr 16 at 15:15






      • 1





        ...what had been uniquely good about the way real mode segments worked. Had the 80386 protected mode incorporated the better aspects of 8086 segmentation, it would have been practical for a framework like .NET to allow programs to access many gigs of storage using 32-bit object references (which would only take half as much cache space as 64-bit ones). Make segment identifiers 32 bits, with the upper portion selecting a descriptor that contains a base and a scale factor, and the lower containing a scaled offset. That would allow every object to start at address 0 of some segment...

        – supercat
        Apr 16 at 15:19












      • 2





        @BrianH: Perhaps "mode" wasn't the best term. Maybe "ways of doing things" is better, though some of those new ways of doing things also included new 32-bit modes. Perhaps the most important point is that compilers targeting code for the 80386 and later processors will tend to do things fundamentally differently from those targeting the 80286 and earlier processors, so they really should be viewed as distinct architectures.

        – supercat
        Apr 15 at 20:30






      • 13





        @BrianH, 32-bit protected mode with paging and all that is pretty much fundamentally different from the 16-bit protected mode in the 286.

        – ilkkachu
        Apr 15 at 22:16






      • 1





        also, from memory the 286 had to reset to come out of protected mode, while the 386 could change modes at will, so protected mode wasn't widely used until the 386 came along.

        – Joseph Rogers
        Apr 16 at 14:15






      • 2





        @JosephRogers: Protected mode was the only way to access more than 1MB of address space, so using storage beyond that region within a DOS program would require switching to protected mode, doing the access, setting a special "the reset handler should reload some registers and resume normal operation" flag, and then asking the keyboard chip to trigger a CPU reset. There was actually an undocumented way code could access upper memory without that rigamarole, but that wasn't discovered until the 80386 came along. It's really a shame that the designers of protected mode failed to recognize...

        – supercat
        Apr 16 at 15:15






      • 1





        ...what had been uniquely good about the way real mode segments worked. Had the 80386 protected mode incorporated the better aspects of 8086 segmentation, it would have been practical for a framework like .NET to allow programs to access many gigs of storage using 32-bit object references (which would only take half as much cache space as 64-bit ones). Make segment identifiers 32 bits, with the upper portion selecting a descriptor that contains a base and a scale factor, and the lower containing a scaled offset. That would allow every object to start at address 0 of some segment...

        – supercat
        Apr 16 at 15:19







      2




      2





      @BrianH: Perhaps "mode" wasn't the best term. Maybe "ways of doing things" is better, though some of those new ways of doing things also included new 32-bit modes. Perhaps the most important point is that compilers targeting code for the 80386 and later processors will tend to do things fundamentally differently from those targeting the 80286 and earlier processors, so they really should be viewed as distinct architectures.

      – supercat
      Apr 15 at 20:30





      @BrianH: Perhaps "mode" wasn't the best term. Maybe "ways of doing things" is better, though some of those new ways of doing things also included new 32-bit modes. Perhaps the most important point is that compilers targeting code for the 80386 and later processors will tend to do things fundamentally differently from those targeting the 80286 and earlier processors, so they really should be viewed as distinct architectures.

      – supercat
      Apr 15 at 20:30




      13




      13





      @BrianH, 32-bit protected mode with paging and all that is pretty much fundamentally different from the 16-bit protected mode in the 286.

      – ilkkachu
      Apr 15 at 22:16





      @BrianH, 32-bit protected mode with paging and all that is pretty much fundamentally different from the 16-bit protected mode in the 286.

      – ilkkachu
      Apr 15 at 22:16




      1




      1





      also, from memory the 286 had to reset to come out of protected mode, while the 386 could change modes at will, so protected mode wasn't widely used until the 386 came along.

      – Joseph Rogers
      Apr 16 at 14:15





      also, from memory the 286 had to reset to come out of protected mode, while the 386 could change modes at will, so protected mode wasn't widely used until the 386 came along.

      – Joseph Rogers
      Apr 16 at 14:15




      2




      2





      @JosephRogers: Protected mode was the only way to access more than 1MB of address space, so using storage beyond that region within a DOS program would require switching to protected mode, doing the access, setting a special "the reset handler should reload some registers and resume normal operation" flag, and then asking the keyboard chip to trigger a CPU reset. There was actually an undocumented way code could access upper memory without that rigamarole, but that wasn't discovered until the 80386 came along. It's really a shame that the designers of protected mode failed to recognize...

      – supercat
      Apr 16 at 15:15





      @JosephRogers: Protected mode was the only way to access more than 1MB of address space, so using storage beyond that region within a DOS program would require switching to protected mode, doing the access, setting a special "the reset handler should reload some registers and resume normal operation" flag, and then asking the keyboard chip to trigger a CPU reset. There was actually an undocumented way code could access upper memory without that rigamarole, but that wasn't discovered until the 80386 came along. It's really a shame that the designers of protected mode failed to recognize...

      – supercat
      Apr 16 at 15:15




      1




      1





      ...what had been uniquely good about the way real mode segments worked. Had the 80386 protected mode incorporated the better aspects of 8086 segmentation, it would have been practical for a framework like .NET to allow programs to access many gigs of storage using 32-bit object references (which would only take half as much cache space as 64-bit ones). Make segment identifiers 32 bits, with the upper portion selecting a descriptor that contains a base and a scale factor, and the lower containing a scaled offset. That would allow every object to start at address 0 of some segment...

      – supercat
      Apr 16 at 15:19





      ...what had been uniquely good about the way real mode segments worked. Had the 80386 protected mode incorporated the better aspects of 8086 segmentation, it would have been practical for a framework like .NET to allow programs to access many gigs of storage using 32-bit object references (which would only take half as much cache space as 64-bit ones). Make segment identifiers 32 bits, with the upper portion selecting a descriptor that contains a base and a scale factor, and the lower containing a scaled offset. That would allow every object to start at address 0 of some segment...

      – supercat
      Apr 16 at 15:19











      37














      x is meant as wildcard, so this represents all CPUs able to run 8086 compatible code.






      share|improve this answer


















      • 7





        This answer is so far the only answer that addresses the original question about what the "x" represents.

        – G. Tranter
        Apr 15 at 21:32






      • 5





        @G.Tranter I agree up to a point. However, if you write "x86" people usually assume you mean compatible with the Intel 80386. i.e. capable of running in 32 bit protected mode. For example, if you compile a C program with gcc -march=x86 the code won't run on an 8086.

        – JeremyP
        Apr 16 at 8:47






      • 4





        @JeremyP Exactly my thoughts. Even in the nineties, I don't remember anybody using x86 to mean 80286 or earlier. The gap was too large to put 16-bit systems in the same bag as 32-bit. When x86 appeared as a term I think everyone meant "80386+" by it.

        – kubanczyk
        Apr 16 at 9:34







      • 2





        Read the question. I find it's always best to answer the question, not what I think is the question behind the question. The "x" is a wildcard. If you read the whole question, you'd see that the author already understands the architecture part.

        – G. Tranter
        Apr 16 at 13:41







      • 1





        @JeremyP: gcc doesn't have -march=x86. It has -march=i386. See godbolt.org/z/xg19XI shows gcc -m32's help for invalid -march=... values, which lists all it supports. If you run x86 gcc with the default -m64, it leaves out arches that only support 32-bit mode. gcc -m16 exists, but still requires 386+ because it mostly just assembles its usual machine code with .code16gcc so instructions with explicit operands get an operand-size and address-size prefix. Anyway, the critical point is that gcc -march never pretended to set the target mode, just ISA extensions within it.

        – Peter Cordes
        Apr 17 at 1:00
















      37














      x is meant as wildcard, so this represents all CPUs able to run 8086 compatible code.






      share|improve this answer


















      • 7





        This answer is so far the only answer that addresses the original question about what the "x" represents.

        – G. Tranter
        Apr 15 at 21:32






      • 5





        @G.Tranter I agree up to a point. However, if you write "x86" people usually assume you mean compatible with the Intel 80386. i.e. capable of running in 32 bit protected mode. For example, if you compile a C program with gcc -march=x86 the code won't run on an 8086.

        – JeremyP
        Apr 16 at 8:47






      • 4





        @JeremyP Exactly my thoughts. Even in the nineties, I don't remember anybody using x86 to mean 80286 or earlier. The gap was too large to put 16-bit systems in the same bag as 32-bit. When x86 appeared as a term I think everyone meant "80386+" by it.

        – kubanczyk
        Apr 16 at 9:34







      • 2





        Read the question. I find it's always best to answer the question, not what I think is the question behind the question. The "x" is a wildcard. If you read the whole question, you'd see that the author already understands the architecture part.

        – G. Tranter
        Apr 16 at 13:41







      • 1





        @JeremyP: gcc doesn't have -march=x86. It has -march=i386. See godbolt.org/z/xg19XI shows gcc -m32's help for invalid -march=... values, which lists all it supports. If you run x86 gcc with the default -m64, it leaves out arches that only support 32-bit mode. gcc -m16 exists, but still requires 386+ because it mostly just assembles its usual machine code with .code16gcc so instructions with explicit operands get an operand-size and address-size prefix. Anyway, the critical point is that gcc -march never pretended to set the target mode, just ISA extensions within it.

        – Peter Cordes
        Apr 17 at 1:00














      37












      37








      37







      x is meant as wildcard, so this represents all CPUs able to run 8086 compatible code.






      share|improve this answer













      x is meant as wildcard, so this represents all CPUs able to run 8086 compatible code.







      share|improve this answer












      share|improve this answer



      share|improve this answer










      answered Apr 15 at 16:53









      RaffzahnRaffzahn

      57k6139232




      57k6139232







      • 7





        This answer is so far the only answer that addresses the original question about what the "x" represents.

        – G. Tranter
        Apr 15 at 21:32






      • 5





        @G.Tranter I agree up to a point. However, if you write "x86" people usually assume you mean compatible with the Intel 80386. i.e. capable of running in 32 bit protected mode. For example, if you compile a C program with gcc -march=x86 the code won't run on an 8086.

        – JeremyP
        Apr 16 at 8:47






      • 4





        @JeremyP Exactly my thoughts. Even in the nineties, I don't remember anybody using x86 to mean 80286 or earlier. The gap was too large to put 16-bit systems in the same bag as 32-bit. When x86 appeared as a term I think everyone meant "80386+" by it.

        – kubanczyk
        Apr 16 at 9:34







      • 2





        Read the question. I find it's always best to answer the question, not what I think is the question behind the question. The "x" is a wildcard. If you read the whole question, you'd see that the author already understands the architecture part.

        – G. Tranter
        Apr 16 at 13:41







      • 1





        @JeremyP: gcc doesn't have -march=x86. It has -march=i386. See godbolt.org/z/xg19XI shows gcc -m32's help for invalid -march=... values, which lists all it supports. If you run x86 gcc with the default -m64, it leaves out arches that only support 32-bit mode. gcc -m16 exists, but still requires 386+ because it mostly just assembles its usual machine code with .code16gcc so instructions with explicit operands get an operand-size and address-size prefix. Anyway, the critical point is that gcc -march never pretended to set the target mode, just ISA extensions within it.

        – Peter Cordes
        Apr 17 at 1:00













      • 7





        This answer is so far the only answer that addresses the original question about what the "x" represents.

        – G. Tranter
        Apr 15 at 21:32






      • 5





        @G.Tranter I agree up to a point. However, if you write "x86" people usually assume you mean compatible with the Intel 80386. i.e. capable of running in 32 bit protected mode. For example, if you compile a C program with gcc -march=x86 the code won't run on an 8086.

        – JeremyP
        Apr 16 at 8:47






      • 4





        @JeremyP Exactly my thoughts. Even in the nineties, I don't remember anybody using x86 to mean 80286 or earlier. The gap was too large to put 16-bit systems in the same bag as 32-bit. When x86 appeared as a term I think everyone meant "80386+" by it.

        – kubanczyk
        Apr 16 at 9:34







      • 2





        Read the question. I find it's always best to answer the question, not what I think is the question behind the question. The "x" is a wildcard. If you read the whole question, you'd see that the author already understands the architecture part.

        – G. Tranter
        Apr 16 at 13:41







      • 1





        @JeremyP: gcc doesn't have -march=x86. It has -march=i386. See godbolt.org/z/xg19XI shows gcc -m32's help for invalid -march=... values, which lists all it supports. If you run x86 gcc with the default -m64, it leaves out arches that only support 32-bit mode. gcc -m16 exists, but still requires 386+ because it mostly just assembles its usual machine code with .code16gcc so instructions with explicit operands get an operand-size and address-size prefix. Anyway, the critical point is that gcc -march never pretended to set the target mode, just ISA extensions within it.

        – Peter Cordes
        Apr 17 at 1:00








      7




      7





      This answer is so far the only answer that addresses the original question about what the "x" represents.

      – G. Tranter
      Apr 15 at 21:32





      This answer is so far the only answer that addresses the original question about what the "x" represents.

      – G. Tranter
      Apr 15 at 21:32




      5




      5





      @G.Tranter I agree up to a point. However, if you write "x86" people usually assume you mean compatible with the Intel 80386. i.e. capable of running in 32 bit protected mode. For example, if you compile a C program with gcc -march=x86 the code won't run on an 8086.

      – JeremyP
      Apr 16 at 8:47





      @G.Tranter I agree up to a point. However, if you write "x86" people usually assume you mean compatible with the Intel 80386. i.e. capable of running in 32 bit protected mode. For example, if you compile a C program with gcc -march=x86 the code won't run on an 8086.

      – JeremyP
      Apr 16 at 8:47




      4




      4





      @JeremyP Exactly my thoughts. Even in the nineties, I don't remember anybody using x86 to mean 80286 or earlier. The gap was too large to put 16-bit systems in the same bag as 32-bit. When x86 appeared as a term I think everyone meant "80386+" by it.

      – kubanczyk
      Apr 16 at 9:34






      @JeremyP Exactly my thoughts. Even in the nineties, I don't remember anybody using x86 to mean 80286 or earlier. The gap was too large to put 16-bit systems in the same bag as 32-bit. When x86 appeared as a term I think everyone meant "80386+" by it.

      – kubanczyk
      Apr 16 at 9:34





      2




      2





      Read the question. I find it's always best to answer the question, not what I think is the question behind the question. The "x" is a wildcard. If you read the whole question, you'd see that the author already understands the architecture part.

      – G. Tranter
      Apr 16 at 13:41






      Read the question. I find it's always best to answer the question, not what I think is the question behind the question. The "x" is a wildcard. If you read the whole question, you'd see that the author already understands the architecture part.

      – G. Tranter
      Apr 16 at 13:41





      1




      1





      @JeremyP: gcc doesn't have -march=x86. It has -march=i386. See godbolt.org/z/xg19XI shows gcc -m32's help for invalid -march=... values, which lists all it supports. If you run x86 gcc with the default -m64, it leaves out arches that only support 32-bit mode. gcc -m16 exists, but still requires 386+ because it mostly just assembles its usual machine code with .code16gcc so instructions with explicit operands get an operand-size and address-size prefix. Anyway, the critical point is that gcc -march never pretended to set the target mode, just ISA extensions within it.

      – Peter Cordes
      Apr 17 at 1:00






      @JeremyP: gcc doesn't have -march=x86. It has -march=i386. See godbolt.org/z/xg19XI shows gcc -m32's help for invalid -march=... values, which lists all it supports. If you run x86 gcc with the default -m64, it leaves out arches that only support 32-bit mode. gcc -m16 exists, but still requires 386+ because it mostly just assembles its usual machine code with .code16gcc so instructions with explicit operands get an operand-size and address-size prefix. Anyway, the critical point is that gcc -march never pretended to set the target mode, just ISA extensions within it.

      – Peter Cordes
      Apr 17 at 1:00












      9














      In modern usage it also means software which only uses the 32-bit architecture of the earlier 80x86 processors, to distinguish it from 64-bit applications.



      Microsoft uses it that way on 64-bit versions of Windows, which have two separate directories called "Program Files" and "Program Files (x86)."



      The 32-bit applications will run on 64-bit hardware, but the OS needs to provide the appropriate 32 or 64 bit interface at run-time.
























      • 5





        That doesn't mean software though, it means the hardware the software is built for. Consider a rack of fan belts labelled "Ford Focus", "Nissan Micra", etc.. You're not saying the fan belt is a Nissan Micra, only that it's suitable for use on one.

        – Graham
        Apr 16 at 7:17











      • Only MS Windows uses x86 to specifically exclude x86-64. In other contexts, like computer architecture discussion, x86 includes all CPUs that are backwards-compatible with 8086, with the usual assumption that modern CPUs are running in their best mode (x86-64 long mode). Or at least no implication of 32-bit mode specifically. e.g. "x86 has efficient unaligned loads, but ARM or MIPS doesn't always". But I'd certainly say "modern x86 has 16 integer and 16 or 32 vector registers". (Which is only true in long mode, and I'm of course talking about architectural registers, not physical.)

        – Peter Cordes
        Apr 17 at 1:05












      • TL:DR: In other contexts (outside of MS Windows software), x86-64 is a subset of x86, not a disjoint set.

        – Peter Cordes
        Apr 17 at 1:08











      • A term that unambiguously means 32-bit x86 is "IA-32". en.wikipedia.org/wiki/IA-32. Intel uses that in their ISA manuals. (For a while, they used IA-32e (enhanced) for x86-64-specific features / modes. I forget if they still do that or if they call it x86-64.)

        – Peter Cordes
        Apr 17 at 1:29















      answered Apr 15 at 17:36









      alephzero

















      6














      Intel products were numbered. For example, their first microprocessor was the 4-bit Intel 4004, which was coupled with the 4001 ROM, 4002 RAM, and 4003 shift register. The start of the number denoted the series, and the last digit denoted the specific part.



      Later, the Intel 8008 came along, which was an 8-bit microprocessor. This was succeeded by the 8080, which was then replaced by the 8085, which was in turn replaced by the 8086.



      After the 8086, processors started taking on the format 80x86, with x being a number, as in 80186, 80286, 80386, etc. They were backward compatible with one another, and modern computers still boot into 16-bit mode. As Intel continued rolling out processors, they began to be referred to as Intel 386 or Intel 486 rather than Intel 80386. This is how the terms 'i386' and 'i586' came into play. As they were based on the same architecture, they were collectively called Intel x86, where x refers to a number. They also came with coprocessors whose numbers ended in '7', such as the 80387, and as such we also have x87.
















      New contributor




      Ender - Joshua Pritsker is a new contributor to this site. Take care in asking for clarification, commenting, and answering.
      Check out our Code of Conduct.















      • 1





        You have a typo: 8186->80186. Too small for me to edit myself.

        – Martin Bonner
        Apr 16 at 10:20











      • @MartinBonner Thanks, fixed.

        – Ender - Joshua Pritsker
        Apr 16 at 21:27











      • Why does "i586" refer to Pentium 1, and why does "i686" refer to Pentium Pro? - i586 is a made-up term. The CPU model numbers were 80500 through 80502 for the P5 / P54C / P55C microarchitectures. But yeah there was a 5 in there, so i586 is semi justified for convenience and consistency.

        – Peter Cordes
        Apr 17 at 1:19















      edited Apr 16 at 21:27






























      answered Apr 16 at 5:22









      Ender - Joshua Pritsker















      4














      It just means any processor compatible with the same architecture.
      So it includes the 8088, 8086, 80186, 80286, 80386, 80486, Pentium, etc.






          answered Apr 15 at 16:53









      Justme





















              3














              The name "x86" was never 'given' or 'desiged' this way. If I remember correctly, it more or less evolved as a convenient abbriviation for a whole range of compatible processors.



              Back in the day when PC's became populair, it was important that your PC was "IBM Compatible". This meant, among other things, your PC must have an Intel 8086 or an 8088. Later, when Intel released more powerfull processors such as the (rare) 80186 or (popular) 80286, it was still important that your PC was just "MS-Dos" or "IBM Compatible". The 80286 was just a faster processor. It had a protected mode feature, but little software actually used or even required that.



              The next step was the 80386. This was an improvement over the 80286 because it had a mode that provided full backward compatibility with 8086 programs. Operating systems such as OS/2, DesqView and MS-Windows used this mode to provide backward compatibility whith existing software. Other operating systems such as Linux and *BSD's designed for PC hardware also depended on some new features of the 80386 without acutally providing direct compatibilitiy with existing MS-DOS software. All these systems required a 80386 processor.



              Then came the 80486. An even faster and more powerfull processor but mainly backward compatible with the '386. So if you bought a '486 you could still run software designed for the '386. The package would say 'needs a 386 or better' or 'needs 386 or 486'



              Along came the 80586 or Pentium. And then the Pentium Pro, also known as 80686...



              By this time software developers got tired of listing all possible numbers and since most software was still written to be able to run on a '386, the whole list of numbers was abbriviated to just "x86". This later became synonymous with "32 bit", because the 80386 was a 32 bit processor and hence software that's written for 'x86' is 32-bit software.














              New contributor




              Oscar is a new contributor to this site. Take care in asking for clarification, commenting, and answering.
              Check out our Code of Conduct.

































                  answered 2 days ago









                  Oscar

























                      1














                      Practically, x86 is shorthand for "an 80386 or 80486 running in 32-bit mode". It comes from the 8086/186/286+ line, but Win32 cannot run on a CPU below the 386. After the 80486, the 80*86 naming scheme was changed to Pentium [N] and AMD [model].














                      New contributor




                      i486 is a new contributor to this site. Take care in asking for clarification, commenting, and answering.
                      Check out our Code of Conduct.




















                      • Why does "i586" refer to Pentium 1, and why does "i686" refer to Pentium Pro? explains that in casual usage, i586 and i686 were used for somewhat justifiable reasons. x86 definitely does not exclude modern CPUs like Skylake! In most contexts other than MS Windows (e.g. CPU architecture discussion) it also doesn't mean specifically 32-bit mode.

                        – Peter Cordes
                        Apr 17 at 1:12
























                      answered Apr 16 at 12:21









                      i486













