User:Schol-r-lea/Itanic


N.B.: This is one of a group of forum posts extracted so that the information in them can be re-worked into a less confrontational, more informational set of wiki pages. They are here only temporarily.


I was assuming that Zaval was making a joke, but apparently not. I don't know enough about the Itanium ISA to really judge, but from what little I saw of it, I wasn't impressed - I had the sense that they were combining a lot of old mistakes with several newer and shinier ones (not new, exactly, since they had already been tried and failed elsewhere, but newer).

I agree with Brendan that a slow migration would be the wisest path for Intel - the only way a new architecture would make sense at this point is if it gave such a dramatic boost in performance that it could emulate the x86 at the equivalent of current speeds (for whatever 'current' happens to be at the time). This worked for Apple in 1994 only because Motorola had done almost nothing to improve the 680x0 series' performance between 1978 and 1988 except increase the clock speed, and the PowerPC chips were already at higher clock speeds than the 68040, meaning that a PowerMac actually could be faster under emulation than a 68040 Mac running native. Trying to do the same with the most heavily optimized chips on the market (even now, the top i7 is faster per core than any of its competitors; AMD has done a lot of catching up with Ryzen, but it still isn't really a match, even if it is a much better value for the price) is really unlikely to succeed unless a radically new approach arises. Trust me, Subleq isn't nearly radical enough. I mean something like quantum computing, or something else that throws the von Neumann and Harvard models out entirely, and possibly even abandons the RAM machine model in general.

As I say, as much as I hate the x86 personally, I really don't see it dying off yet... for reasons Intel doesn't control.

Who could change it? Microsoft, potentially, though they court an uprising if they try. Like Intel, Microsoft had been planning to move away from the x86 since the early 1990s (see the "Advanced Computing Environment" debacle, and the moves to put Windows on Alpha, MIPS, and, partially, PowerPC). Despite the talk of 'secure computing' (never a real priority for Microsoft, simply because it was never enough of a priority for the average user, so it bought them nothing), the primary reason for .NET is the same as the primary reason for the Java Virtual Machine: it was meant as a lever with which to pry users off of an existing hardware platform - in Java's case, off of the VAX and the older 386-based Sun workstations and onto the Sun SPARC workstations; in .NET's case, off of x86 and onto... something else.

It hasn't worked out either time. In the Java case, Sun themselves got overwhelmed by cheap PCs, which gave a significant portion of the power of their workstations for a fraction of the cost, and Sun didn't have enough money to keep improving the workstations, so even the performance edge was lost soon enough. In the Microsoft case, there simply hasn't been a successor to the x86 in the past fifteen years that was even its equal in sheer speed, never mind superior. They had their bags packed but had nowhere to move to, because Intel and AMD could - and had to, in order to stay alive - pour ten times as much money into the x86 ISA as anyone else could into any other one, and no one else wanted a replacement so badly that they could afford to foot the bill.

Things may be changing soon, though. CPU performance in general has been on a plateau for the past ten years - it has improved, but only by very small increments compared to the five years before that. The limiting factors in performance are now elsewhere: disk access (hence the introduction of Optane and the ever-rising popularity of SSDs despite their price), graphics (though that had reached a plateau of sorts as well, until the drive to go 4K gave it greater impetus again), main memory size and speed, and... well, malware.

Seriously, the main thing that hurts Windows (and Mac and Linux to a much lesser extent, since they are lower-priority targets, but it's still present) is the massive amount of malicious and intrusive junk that accumulates in every PC starting from the first time it gets on a network. CPU speed can't hope to overcome a tsunami like that. Without constant, intensive maintenance, even the fastest i7 will run slower than an XT after a year of regular Internet use, because it will be more clogged with crud than my arteries (and I am a lardball who is pushing fifty).

This means that MS has to be doing some serious thinking about getting a move on. I think that the ARM port of Windows might be for more than just mobile systems - my guess is that they believe that, by performing an Apple-esque trapeze act to a new platform and getting more software running on .NET rather than on any native model, they can unclog their systems (temporarily) and tighten their control of the platform at the same time. I doubt it will work - they would be throwing the hardcore PC gamers and other high-performance audiovisual users under the bus for at least a few years, and there are just too many legacy programs they would be screwing over for them ever to drop x86 emulation - but I think it will prove too tempting for them not to try.

Besides, neither gamers nor artists are significant markets for MS, so no real loss there - 'serious' gamers are a tiny sliver of the market, while most musicians, video editors, and artists are on Macs anyway - though the stink it would raise would hurt their PR. They have already been slowly tightening the noose around pre-XP legacy software, so that, too, isn't as big an issue as it would have been in, say, 2003.

It would be a band-aid over a sucking chest wound, but it would buy them a couple of years as a fix - and while Microsoft isn't as short-term oriented as most other big corpse-rat cultures, they still think 'strategic long-term planning' is 18 months to two years. It may sound crazy to us engineers and hackers, but given that this approach does seem to be working for them, I will leave it to the MBAs to decide if they have the right of it.

Even if not, that's two more years, and hey, maybe the horse will sing.

But we were talking about Subleq and the effect that overdosing on Hypeocaine has had on Geri's brain, right? Ooh, now with extra Kool-Aid (Bitter Almond flavored, of course)!


Oh, two minor notes regarding the Itanic: first, it was never actually intended for desktop systems, or even workstations - it was, and remains, primarily a server design. They had tentative plans to make workstation versions (HP actually did make one based on it) and eventually migrate it down to the desktop, but that was predicated on it being successful as a server system first. It hasn't been one, for the most part, more due to poor marketing, vitriolic corporate alliances that quickly fell apart, and the whole RAMBUS disaster than due to any issues with the ISA or the CPUs themselves. (The initial Itanium 2 chipset, the E8870, was intended to work only with RDRAM, as was the 840 chipset for the Pentium III and the 850 and 860 chipsets for the Pentium 4; when RDRAM proved to be more expensive, less reliable, and less performant than existing SDRAM, they had to scramble to fix them.)

Second, they still make them - hence the 'remains' part. There was even a new production series, Kittson, that started getting releases in February of 2017, according to Wicked-Pedo. It never took off, but it never actually went away, either, though it probably will soon, now that the HP deal has ended.


Geri's reply: Out of the 97 tons of comments, I have time to react to one, and that's the topic of the Itanium (which someone obviously mentioned as a joke). If we try to place the architectures in some kind of coordinate system by their complexity, Subleq is probably far off at one end, and at the other end there is possibly the Itanium. The Itanium is basically a parody of itself: it took 10x more transistors to show the same performance as an equivalent x86. In theory, all of its instructions are designed to be very, very efficient, but in practice no one was able to write proper compilers for it - a real workload rarely needs, for example, three multiplications directly after each other (which is why the SIMD units are not so efficient in high-level code anyway; Intel should be aware of that). The Itanium was actually (while being much more complex and less free than even the x86) meant not just for servers - it was meant to replace the whole x86, including on desktops. Intel also convinced Microsoft and a lot of corporations to jump on their bandwagon, and as we have seen, they failed miserably. The conclusion of the story is that having extremely complex instruction sets with VLIW and SIMD will just not solve the issues of computing, and I really hope that nobody will try to emulate that.



My Reply: I expect that both the Intel C compiler team and the GCC team would disagree. They both had solid compilers for it before it was actually on the market, and Linux was successfully ported in less than six months. Who didn't have compilers? Anyone who was waiting to see if it would get picked up by anyone other than HP, which was pretty much everyone else - including Microsoft.

IRT SIMD, it is a very common thing in certain workloads, including things like spreadsheets and graphics.

SIMD is meant for two things: high-performance number crunching and graphics processing. The move to push video processing (back) onto GPUs, which was just getting going at the time (cycle of reincarnation, yo), made SIMD a lot less useful for desktop and workstation systems, but it was and remains a real boon for HPC - where it is known as 'vector processing' and is absolutely essential for many heavy CPU-bound simulations (OK, real vector processing requires multiple parallel operations, but SIMD is at least a move in the right direction). Anything involving large vectors or matrices would benefit from it.
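To make that concrete, here is a minimal sketch of the kind of vector work SIMD buys you: adding two float arrays four lanes at a time with the SSE intrinsics, plus a scalar loop for the leftovers. The function and variable names are mine and purely illustrative; a real HPC kernel would fuss much more over alignment and blocking, but the shape is the same.

    #include <xmmintrin.h>  /* SSE intrinsics */
    #include <stddef.h>

    /* dst[i] = a[i] + b[i], processing four floats per instruction */
    void add_arrays(float *dst, const float *a, const float *b, size_t n)
    {
        size_t i = 0;
        for (; i + 4 <= n; i += 4) {
            __m128 va = _mm_loadu_ps(a + i);
            __m128 vb = _mm_loadu_ps(b + i);
            _mm_storeu_ps(dst + i, _mm_add_ps(va, vb));
        }
        /* scalar tail for whatever doesn't fill a full lane group */
        for (; i < n; i++)
            dst[i] = a[i] + b[i];
    }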

On the desktop? It's pretty much just in spreadsheets, and since you would need either a lot of hand-coded assembly, or a specialized compiler and a language designed specifically to use the special operations, it probably wouldn't be used even for that initially.

Further clarification later: Before anyone questions my earlier statement about needing a special language for vector processing: yes, basic forms of automatic vectorization are a thing, but not enough of a thing for this, what with the general problem being NP-hard and all.
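A quick illustration of the limits (again, just a sketch; exactly what gets vectorized depends on the compiler and the flags): the first loop below has independent iterations, so GCC or Clang at -O3 will usually vectorize it on their own, while the second has a loop-carried dependency - each iteration needs the previous result - so it can't simply be run several lanes at a time as written.

    /* illustrative only; actual behaviour depends on compiler and flags */
    void scale(float *a, float k, int n)
    {
        for (int i = 0; i < n; i++)
            a[i] = a[i] * k;          /* iterations independent: vectorizable */
    }

    void prefix_sum(float *a, int n)
    {
        for (int i = 1; i < n; i++)
            a[i] = a[i] + a[i - 1];   /* needs a[i-1] from the previous iteration */
    }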

Speaking of which, know what else is probably NP-hard (assuming that it is tractable at all - I don't know if that has been demonstrated or not, but I think it has been)? Optimizing the order of instructions in a UTM. Or in anything else that has only a single data-dependent flow-of-control operation, such as, say, an OISC. Mind you, this applies to RAM machines in general, but having only a single instruction does the compiler devs no favors.

While RISC is generally easier to target than CISC, the main reasons are orthogonality, regularity, and expanded register files. While an OISC is regular to an extreme, it is not orthogonal - it can't be, since all operations look and behave exactly the same. And an OISC has no explicit registers at all.
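If you've never looked at one, the whole machine fits in a dozen lines of C - which is rather the point. This is a sketch of the usual Subleq convention (operand order and the halting rule vary between variants, and I've skipped bounds checks and I/O entirely): every instruction is three memory words A, B, C, meaning mem[B] -= mem[A], then jump to C if the result is zero or negative.

    #include <stdio.h>

    #define MEMSIZE 4096

    int main(void)
    {
        static int mem[MEMSIZE] = { 0 };  /* program and data would be loaded here */
        int pc = 0;

        while (pc >= 0 && pc + 2 < MEMSIZE) {
            int a = mem[pc], b = mem[pc + 1], c = mem[pc + 2];
            mem[b] -= mem[a];                  /* the one and only operation */
            pc = (mem[b] <= 0) ? c : pc + 3;   /* branch if <= 0, else fall through */
        }
        printf("halted at pc=%d\n", pc);       /* a negative C target halts it */
        return 0;
    }

Notice that there are no registers, no flags, and no second opcode to be orthogonal against - everything is the same subtract-and-maybe-branch, which is exactly what makes it miserable to schedule for.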

In fact, I would be really curious to see just how this C compiler of yours works... C is a really, really good fit for a register machine, but a terrible one for an OISC. I'm not sure what language would be a good fit for one (something in the Constraint or Dataflow paradigms, maybe, or something for declaratively defining finite state automata), but C (and most of the other Algol-family languages) isn't it.
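For a taste of why, here is what a hypothetical Subleq back end has to emit for something as trivial as a = b + c (a sketch under the same convention as the interpreter above; the C variables stand in for memory cells, the comments show the subleq triples, and the branch targets are omitted because every step just falls through). Addition has to be synthesized by subtracting through a scratch cell that starts, and must end, at zero.

    #include <stdio.h>

    int main(void)
    {
        int a = 0, b = 3, c = 4, z = 0;   /* z is the scratch cell, initially zero */

        a -= a;   /* subleq a, a : a = 0                     */
        z -= b;   /* subleq b, z : z = -b                    */
        z -= c;   /* subleq c, z : z = -b - c                */
        a -= z;   /* subleq z, a : a = 0 - (-b - c) = b + c  */
        z -= z;   /* subleq z, z : put the scratch back to 0 */

        printf("a = %d\n", a);   /* prints 7 */
        return 0;
    }

Five instructions and a reserved scratch cell for a single C operator - and that's before you get to multiplication, array indexing, or calling conventions.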
