
The Wrong Roadmap





The HPRC* pages @ TU Kaiserslautern

Computing follows the Wrong Roadmap!  Why? Click here

These HPRC pages are about a route to Reinvent Computing. The term is not new: see the keynote by Burton Smith (former Cray CTO) - mirror.

Why Reinvent Computing? Please study Thomas Sterling's interview entitled 'I Think We Will Never Reach Zettaflops' (see HPCwire). Thomas Sterling takes us through some of the most critical developments in high performance computing, explaining why the transition to exascale is going to be very different from the ones in the past. I agree. However, I believe we will reach Zettaflops --- by including Reconfigurable Computing and Datastream-based Computing.

The Computing Crisis

Originally, using microprocessors had been quite simple. Each new microprocessor generation came to market with a much higher clock frequency, so that computing performance grew at roughly the rate of Moore's Law. Since there was only a single processing unit (PU) on a chip, parallel programming skills were not a dramatic issue. This paradise broke down in the year 2004 with a strategy change of the microprocessor hardware industry, which we call the manycore (or multicore) strategy. Now, instead of the number of Hertz, the number of (slower) processors on a chip is growing, and the growth curve of processing performance is disappointingly much flatter than the Gordon Moore curve.

The Tunnel View Syndrome [1]

One reason for the dramatic breakdown of programmer productivity is the tunnel view syndrome, which restricts qualification mainly to a single abstraction level within a single paradigm. Now a new mix of programmer qualifications is needed, also including programming for much more sophisticated patterns of parallelism. We have to reinvent Computer Science education. It is time not only to bridge the hardware/software chasm, but also to close the gaps between several paradigms and between abstraction levels.

The Power Wall:   Click here

Another problem is the "power wall". For exascale supercomputers, expected to be feasible in 2018, experts predicted an electricity consumption of up to double that of New York, a city of 16 million people.

buy from Amazon: Diana Goehringer's PhD Thesis

Coming back to Thomas Sterling: the von-Neumann-only mindset is reaching its end. The number of computer-related conference series adding the term "heterogeneous" to their names also points in that direction. Mike Flynn's parallelism taxonomy of computing systems is obsolete (fig. 1). For massive power savings we cannot avoid including reconfigurable computing.

The New Taxonomy

The taxonomy originally known from Mike Flynn (fig. 1 a) has become more complex (fig. 1 b). The green "r" indicates a reconfigurable version. The consequence is that we have to reinvent computing by establishing a multi-paradigm methodology including:

1) instruction-procedural programming (software, based on the von Neumann paradigm),

2) data-procedural programming (flowware, based on the anti-machine paradigm), and

3) structural programming (configware-based) for reconfigurable platforms (morphware) like FPGAs (a sketch contrasting the three classes follows below).
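To make the three classes concrete, here is a minimal, hypothetical sketch in Python (the function names and structure are illustrative assumptions, not taken from any Xputer tool chain). The same accumulation is expressed instruction-procedurally (a program counter walks through instructions), data-procedurally (a data counter emits an operand stream to a fixed operator), and structurally (the operator is handed over as a configuration before the stream starts):

    # Hypothetical illustration only -- names are not from any Xputer tool chain.

    data = [3, 1, 4, 1, 5, 9, 2, 6]

    # 1) Software (instruction-procedural, von Neumann): a program counter steps
    #    through instructions; loop control and addressing are instructions too.
    def software_sum(memory):
        acc = 0
        for i in range(len(memory)):
            acc = acc + memory[i]
        return acc

    # 2) Flowware (data-procedural, anti-machine): a data counter emits a stream
    #    of operands; the operator never changes while the stream flows.
    def data_counter(memory):
        address = 0
        while address < len(memory):   # a data counter, not a program counter
            yield memory[address]
            address += 1

    def flowware_sum(stream):
        acc = 0
        for operand in stream:         # the data stream drives the computation
            acc += operand
        return acc

    # 3) Configware (structural): the operator is a configuration loaded into
    #    the datapath before the stream starts, the way an FPGA is configured.
    def configure(operator):
        def datapath(stream):
            acc = 0
            for operand in stream:
                acc = operator(acc, operand)
            return acc
        return datapath

    adder_datapath = configure(lambda a, b: a + b)   # configuration phase
    print(software_sum(data))                        # 31
    print(flowware_sum(data_counter(data)))          # 31
    print(adder_datapath(data_counter(data)))        # 31

Of course Python blurs the hardware distinction; the point is only which kind of source controls the run: an instruction stream, a data stream, or a structural configuration.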

A long time ago it was demonstrated by anti-machine-based processors (e.g. Xputers) that flowware is drastically less memory-cycle-hungry than software. Because there are such masses of legacy software around, we cannot fully shift back to anti-machine-only computing, like the Hollerith tabulator introduced 1884 (http://hartenstein.de/EIS2/). We must go hetero, since a half century of von-Neumann-only software sits squarely on top.
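The memory-cycle argument can be made visible with a small, hypothetical sketch of a generic address generator (GAG) feeding an auto-sequencing memory (asM): the address sequence of a 2D block scan is fixed once by a few parameters, so no address-computation instructions need to be fetched and executed per access. The parameter names below are illustrative assumptions, not the actual GAG interface:

    # Hypothetical GAG-style 2D block scan (illustrative only, not the real GAG
    # interface): the scan pattern is configured once; in hardware the nested
    # loops are counters, so producing each address costs no instruction fetch.
    def block_scan(base, row_stride, width, height):
        """Yield the address sequence of a width x height block starting at base."""
        for row in range(height):
            for col in range(width):
                yield base + row * row_stride + col

    # The address stream feeds an auto-sequencing memory (asM); the datapath
    # only ever sees the resulting operand stream.
    for address in block_scan(base=0x1000, row_stride=64, width=4, height=3):
        print(hex(address))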

*) High Performance Reconfigurable Computing

Connected Thinking:

We are forced to cover an even wider design space by including datastream-based computing (Fig. 2). We need a radical revolution of designer and programmer education that avoids the tunnel view syndrome [1].

Fig. 2: Reiner's extensions for datastream-based computing by "noI" ("no instruction"): a) of Mike Flynn's taxonomy, b) of Diana Goehringer's taxonomy

[1] Reiner Hartenstein (keynote): Stonewalled Progress of Computing Efficiency: Why we must Reinvent Computing; 25th Symposium on Integrated Circuits and Systems Design (SBCCI 2012), August 30 - September 2, 2012, Brasília, Brazil - http://xputer.de/UnB/Reiner-SBCCI-12.pdf

We must reinvent Computing [Burton Smith]                                           

The traditional Aristotelian world model of computing is obsolete. As long as the (von Neumann) CPU is the only center of the world, the crisis caused by the von Neumann Syndrome cannot be solved. We need a new roadmap. We must reinvent computing with a triple-paradigm methodology to get away from the disastrous instruction-stream-only mindset. We must generalize "programming" by distinguishing between three different classes of programming sources:

1.)  "software" (controlling instruction streams by a program counter)

2.)  "flowware" (controlling data streams by data counters)

3.)  "configware" (for structural programming of reconfigurable platforms like FPGAs)

We must master The Grand Challenge To Reinvent Computing. We urgently need A New World Model of Computing.

How to get the new roadmap?

The need to reinvent computer science education is not the only problem. Already within a single abstraction level, shifting from Flynn's to Goehringer's taxonomy adds more than one dimension to the design space. Changing the number of boxes from 4 to 16 adds one dimension, but the number of types of resources has also doubled, since for each one a reconfigurable and a hardwired version is available. This is a challenge not only to programming, but also to system architecture and its design environments. The new methodology to be developed within this decade also requires an increased number of abstraction levels to be covered by the design process. This massively increases the complexity to be mastered by programmers, designers and the design environments.

The boom of panel discussions and keynote addresses indicates that we currently have only a very rough first draft of a new roadmap. It will probably take years of massive effort to obtain the complete roadmap. The section "Connected Thinking" (see above) shows the large design space we have to cover to solve all pending problems of programming multicore platforms and to meet the parallelization challenges of extreme-scale high performance computing. We must bridge the gaps between abstraction levels and different paradigms. Avoiding the classical tunnel view models is definitely possible, as was proved by the Mead-&-Conway revolution, the most effective project in modern computing science. Also see [1].
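As a back-of-the-envelope illustration of that growth (the authoritative 16-box layout is the one of fig. 1 b, which is not reproduced here, so the two extra axes below are illustrative assumptions): crossing Flynn's four classes with a hardwired/reconfigurable axis and an instruction-driven/datastream-driven axis already yields sixteen boxes:

    from itertools import product

    # Illustrative axes only -- the authoritative 16-box layout is fig. 1 b.
    flynn_classes = ["SISD", "SIMD", "MISD", "MIMD"]
    sequencing    = ["instruction-driven", "datastream-driven"]  # cf. the "noI" extension of fig. 2
    resources     = ["hardwired", "reconfigurable"]              # the green "r" of fig. 1 b

    boxes = list(product(flynn_classes, sequencing, resources))
    print(len(flynn_classes))   # 4  boxes: Flynn's original design space
    print(len(boxes))           # 16 boxes: every classic class now exists in four variants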

Some Evangelist's Links:

The von Neumann syndrome

The tail is wagging the dog

The Watering Can Model (slide 16)

We need a Seismic Shift

Future Computer Systems

Xputer lab achievements

Xputer-related Literature

The Worst Mistake in the History of Computing

How (not) to Invent Something

CS suffers from the Tunnel Vision Syndrome

The biggest mistake in the history of EDA

The Leading Design Language in the 80ies

Multiplier Chip automatically generated from the Math Formula


Search Google (for the number of hits see the line " Results")
Search Bing (for the number of hits see the line " Results")
FPGA | "Reconfigurable Computing" | FPGA & "oil and gas" | FPGA & "automotive" | FPGA & "medical" | FPGA & "chemical" | FPGA & "bio" | FPGA & "defense" | FPGA & "physics" | FPGA & "molecular" | FPGA & "supercomputing" | FPGA & "HPC" | FPGA & "high performance computing" | GAG generic address generator | von Neumann syndrome | Map-oriented Machine | Map-oriented Programming Language | PISA design rule check | Xputer paradigm | hardware description language KARL |



