11-26-2014, 07:28 AM
(11-26-2014, 12:34 AM)Magazorb Wrote: Nice response good sir, better than I had anticipated; apologies for being a dick like that, though. A few notes (in no order):
Not certain exactly what that was responding to, but yeah... that's kinda the purpose of open forums. It's an open forum; anyone can read and respond to anything.
Magazorb Wrote: I never said there were only 3, nor did I directly imply it; I did say "for example".
I was joking about there being only three. As I said, I've always thought of x86 as one of the less efficient architectures.
Magazorb Wrote: My questioning of where you get things from wasn't about one case. It's that you shoot us down for being wrong with evidence that doesn't prove we're wrong.
Other than the thing with my computer and its forwarding, I can't think of anywhere else I've done this.
Magazorb Wrote: About the microcode: it's done due to the architecture, which other vendors implement similarly to Intel's without Intel's licences; this is because they can't just randomly patent things that everyone uses openly. The differences in microcode, despite how few there are now, are due to differences in microarchitecture: Intel goes for a heavily vectorized approach while AMD doesn't (AMD's architecture is really interesting when you break it down; it's remarkable how they get so much performance from so few units on such old dies, though it's a really hard achievement to vectorize MIMD to the extent that Intel managed).
Magazorb Wrote: Wasn't your argument that programmers are bad nowadays because of a lack of programming in microcode, versus those retards that just go derp derp derp, "maNz Iz ar pr0graMmar"? (I refer to coders with extremely poor programming skill; unfortunately this is about 70% of "programmers".)
So, I talked to some people. I couldn't get the guy from HP; he was too busy designing a part for... that one thing ('cuz that's not vague). But I did get a hold of the one from Intel, who then Skyped his friend who works across the street at AMD (they used to work together across the other street at HP), and we all had a little discussion.
Microcoding is done because a processor with enough die space to actually perform all 1200 x86 operations in hardware would be entirely redundant (to which I asked why x86 is so damn redundant; they didn't have an answer), a few hundred times larger, would require thousands of times the power, and would be a few hundred (the same few hundred as before) times slower due to internal bussing delays. (Actually, has HP announced anything really, really big (besides job termination) in like... the last year? I'll mention it if they have.)
Instead, microcoding is used to compile the x86 assembly into the machine code that the individual execution ports (AMD calls them SPs or something like that) use. Using Haswell as an example, each core has eight of these ports, and each port can perform a certain range of operations. For Intel, the actual clock rates on the ports vary depending upon the size of the x86 instruction received and the amount of microcoding and port sharing the operation requires; Intel calls this Turbo Boost or something like that. Due to things and stuff, AMD can't have the same microcoding for legal reasons, and doesn't build processors with the same internal machine code anyway (in general, microcoding is a lot like a runtime compiler). As you said, Intel focuses on SIMD and parallelism; AMD... didn't really seem to have a focus (I couldn't get much from him, probably because he didn't know me). All I could really get was that their goal was to just integrate everything as one chip that does everything.
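To make the "runtime compiler" comparison concrete, here's a toy C sketch of one fat macro-instruction being cracked into simple micro-ops. Everything in it is made up for illustration (the micro-op names, the instruction, the four-uop split); it's not Intel's or AMD's actual microcode, just the shape of the idea:
[code]
#include <stdio.h>

/* Toy micro-op classes; real ones are undocumented and far more varied. */
typedef enum { UOP_LOAD, UOP_ADD, UOP_STORE } uop_t;

/* Hypothetical CISC-style "add memory to memory" macro-instruction. */
typedef struct { int src_addr, dst_addr; } macro_add_mem_t;

/* "Decode" one macro-instruction into a buffer of micro-ops and report
 * how many were emitted -- the microcode-as-compiler idea in miniature. */
static int decode(macro_add_mem_t ins, uop_t out[], int max)
{
    (void)ins; /* a real decoder would use the operands */
    if (max < 4) return 0;
    out[0] = UOP_LOAD;   /* fetch source operand            */
    out[1] = UOP_LOAD;   /* fetch destination operand       */
    out[2] = UOP_ADD;    /* do the add on some ALU port     */
    out[3] = UOP_STORE;  /* write the result back to memory */
    return 4;
}

int main(void)
{
    macro_add_mem_t ins = { 0x1000, 0x2000 };
    uop_t uops[8];
    int n = decode(ins, uops, 8);
    /* A scheduler would now issue these to whichever of the
     * (e.g.) eight ports supports each micro-op class. */
    for (int i = 0; i < n; i++)
        printf("uop %d -> class %d\n", i, uops[i]);
    return 0;
}
[/code]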
Now, both agreed that microcoding is sometimes idiotic. In all honesty, how often are you going to need to perform a double triple vector string back-flip shift (this is not a real x86 opcode), which they just happen to have the opcode for, if you can even remember it exists when you get to it? At some point in the late 70's, some architect realized that he could just code the double triple vector string back-flip shift himself and not have the CPU waste time converting that abstract opcode into the smaller micro-ops, which weren't that hard to code, and RISC was born. The idea being that microcode was slowing down processors, and there wasn't really any reason why the assembly programmer or a compiler couldn't just write out what the opcode mapped to anyway. Apple used Sun Microsystems (their building and fab used to be behind HP, across the other other street, right next to HP's old fab, which is now Agilent), which had RISC processors, which stomped the piss out of x86 when it came to SIMD shit like... music, graphics, drawing, et cetera, and turned Apple into the computer of the jobless hippie artist. >.>
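In C terms, the RISC pitch is roughly this: instead of a single magic string-move opcode (think x86's REP MOVSB), the programmer or compiler just writes out the simple loads and stores that the microcode would have expanded it into anyway. A minimal sketch:
[code]
#include <stddef.h>

/* What a CISC string-move opcode does internally, written out as the
 * simple RISC-style operations a programmer could emit directly. */
void copy_bytes(char *dst, const char *src, size_t n)
{
    while (n--)          /* decrement the count           */
        *dst++ = *src++; /* load, store, advance pointers */
}
[/code]
The loop body is exactly the load/store/increment sequence the big opcode hides; the RISC bet was that exposing it costs nothing and frees the hardware from decoding the fancy version.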
On the other hand, microcode is absolutely necessary for things like Intel's out-of-order engine and the reordering thing on the other side, which are areas where RISC processors suffer, giving rise to the CISC processors' comeback after out-of-order execution was successfully implemented in the late 90's/early 2000's.
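To see why reordering hardware pays off, compare two loops that do the same arithmetic (a standard instruction-level-parallelism demo, nothing vendor-specific): in the first, every add depends on the one before it, so extra execution ports sit idle; in the second, four independent chains give an out-of-order engine work to overlap.
[code]
#include <stdio.h>

/* Serial dependency chain: each add needs the previous result. */
double sum_serial(const double *a, int n)
{
    double s = 0.0;
    for (int i = 0; i < n; i++)
        s += a[i];
    return s;
}

/* Four independent chains: an out-of-order core (or a compiler)
 * can keep several adders busy at once. */
double sum_unrolled(const double *a, int n)
{
    double s0 = 0, s1 = 0, s2 = 0, s3 = 0;
    int i = 0;
    for (; i + 3 < n; i += 4) {
        s0 += a[i];     s1 += a[i + 1];
        s2 += a[i + 2]; s3 += a[i + 3];
    }
    for (; i < n; i++) s0 += a[i]; /* leftover elements */
    return (s0 + s1) + (s2 + s3);
}

int main(void)
{
    double a[] = { 1, 2, 3, 4, 5, 6, 7, 8, 9 };
    printf("%f %f\n", sum_serial(a, 9), sum_unrolled(a, 9));
    return 0;
}
[/code]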
Magazorb Wrote: Supporting code designed for another architecture really isn't that hard; as long as it isn't an afterthought, there are many ways you can go about it. The ways you listed are some, but more exist.
If the difference was obvious enough you could even have HW figure it out, though I would imagine that you would know what code you're going to execute once you were running with a kernel (only speculation).
I was thinking of how to do it efficiently, and really the only way is if it doesn't have to check, and doesn't have to alter all your files on install to include a header of some type. The only problem I see is that, as far as the computer is concerned, it's all just 1's and 0's, and I'm sure that every part of ARM decodes to something in x86 due to its sheer size (obviously, that something is not the right thing). So I don't know how it could be done without some kind of microcode that runs through the program until it starts to question what exactly the program is doing, or until it finds something that is not a supported opcode. Defaulting to ARM makes that easier, because then it's just checking the first byte (I think) of every 32 bits and seeing if it can find an unsupported opcode, instead of trying to decipher varying-length opcodes and instruction sets that can be any length and all that x86 bull shit.
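As a toy version of that scan, here's a sketch assuming classic 32-bit ARM, where bits 31:28 of every instruction word are a condition code and 0xF was originally reserved (later ARM revisions reuse it for unconditional encodings, so this is a crude heuristic for illustration, not a real detector):
[code]
#include <stdint.h>
#include <stdio.h>

/* Crude heuristic: if any 32-bit word's top nibble is the reserved
 * condition code 0xF, take that as a weak hint that the blob is not
 * classic ARM code. Real detection is far messier than this. */
static int looks_like_arm(const uint32_t *words, size_t n)
{
    for (size_t i = 0; i < n; i++)
        if ((words[i] >> 28) == 0xF)
            return 0; /* hit a suspicious "opcode": bail out */
    return 1;
}

int main(void)
{
    /* Three plausible ARM words, all with the common AL (0xE) condition. */
    uint32_t maybe_code[] = { 0xE3A00001, 0xE2800001, 0xEAFFFFFE };
    printf("ARM-ish? %d\n", looks_like_arm(maybe_code,
           sizeof maybe_code / sizeof maybe_code[0]));
    return 0;
}
[/code]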
Also, the AMD guy would not comment when I asked about that, but (for the lulz) I'm sure Intel will try to revoke AMD's x86 licence if they make any headway on this. (heheh, capitalism)
Magazorb Wrote: That concludes the notes.
I must apologise for offending you; however, being self-critical can help. I'll fully admit at times I forget to look into all the details before declaring laws XD. My point, although I worded it very aggressively and perhaps, in hindsight, offensively, is that you should be willing to accept incorrectness in your points and have a more open mind about how people respond to what you say.
Anyhow, have a nice time.
It's all good.