Parsytec

Parsytec
Company type: Public
Founded: 1985
Location: Aachen, NRW, Germany
Area served: North America, South America, Europe, Asia Pacific
Founders: Falk-Dietrich Kübler, Gerhard Peise, Bernd Wolff
Services: Surface inspection systems
URL: http://www.parsytec.de
Language: German

Isra Vision Parsytec AG, a subsidiary of Isra Vision, was originally founded in 1985 as Parsytec (parallel system technology) in Aachen, Germany.

Parsytec gained recognition in the late 1980s and early 1990s as a manufacturer of transputer-based parallel systems. Its product lineup ranged from single transputer plug-in boards for IBM PCs to large, massively parallel systems with thousands of transputers (or processors), such as the Parsytec GC. Some sources describe the latter as ultracomputer-sized, scalable multicomputers (smC).[1] [2]

As part of ISRA VISION AG, the company now focuses on solutions in the machine vision and industrial image processing sectors. ISRA Parsytec products are primarily used for quality and surface inspection, particularly in the metal and paper industries.

History

Parsytec was founded in 1985 in Aachen, Germany, by Falk-Dietrich Kübler, Gerhard H. Peise, and Bernd Wolff, with an 800,000 DM grant from the Federal Ministry for Research and Technology (BMFT).[3]

Unlike SUPRENUM, Parsytec focused its systems, particularly in pattern recognition, on industrial applications such as surface inspection. As a result, the company not only captured a significant market share in European academia but also attracted numerous industrial customers, including many outside Germany. By 1988, exports accounted for approximately one-third of Parsytec's revenue. The company's turnover figures were as follows: zero in 1985, 1.5 million DM in 1986, 5.2 million DM in 1988, 9 million DM in 1989, 15 million DM in 1990, and 17 million USD in 1991.

To allow Parsytec to focus on research and development, a separate entity, ParaCom, was established to handle sales and marketing operations. While Parsytec/ParaCom maintained its headquarters in Aachen, Germany, it also operated subsidiary sales offices in Chemnitz (Germany), Southampton (United Kingdom), Chicago (USA), St Petersburg (Russia), and Moscow (Russia).[4] In Japan, Parsytec's machines were distributed by Matsushita.[3]

Between 1988 and 1994, Parsytec developed a broad range of transputer-based computers, culminating in the "Parsytec GC" (GigaCluster), which was available in configurations ranging from 64 to 16,384 transputers.[5]

Parsytec went public in mid-1999 with an initial public offering (IPO) on the German Stock Exchange in Frankfurt.

On 30 April 2006, founder Falk-D. Kübler left the company.[6]

In July 2007,[7] ISRA VISION AG acquired 52.6% of Parsytec AG.[8] The delisting of Parsytec shares from the stock market began in December of the same year, and since 18 April 2008, Parsytec shares have no longer been listed on the stock exchange.[9]

While Parsytec had a workforce of roughly 130 staff in the early 1990s, the ISRA VISION Group employed more than 500 people in 2012/2013.[10]

Today, the core business of ISRA Parsytec within the ISRA VISION Group is the development and distribution of surface inspection systems for strip products in the metal and paper industries.

Products/Computers

Parsytec's product range included the Megaframe, MultiCluster, SuperCluster, GigaCluster (GC), x'plorer, Cognitive Computer (CC), and Powermouse systems, described below.

In total, approximately 700 stand-alone systems (SC and GC) were shipped.

Initially, Parsytec participated in the GPMIMD (General Purpose MIMD)[11] project under the umbrella of the ESPRIT program,[12] both of which were funded by the European Commission's Directorate for Science. However, after significant disagreements with other participants—Meiko, Parsys, Inmos, and Telmat—regarding the choice of a common physical architecture, Parsytec left the project and announced its own T9000-based machine, the GC. Due to Inmos' issues with the T9000, Parsytec was forced to switch to a system using a combination of Motorola MPC 601 CPUs and Inmos T805 processors. This led to the development of Parsytec's "hybrid" systems (e.g., GC/PP), where transputers were used as communication processors while the computational tasks were offloaded to the PowerPCs.

Parsytec's cluster systems were operated by an external workstation, typically a SUN workstation (e.g., Sun-4).[13]

There is considerable confusion regarding the names of Parsytec products. This is partly due to the architecture, but also because of the aforementioned unavailability of the Inmos T9000, which forced Parsytec to use the T805 and PowerPC processors instead. Systems equipped with PowerPC processors were given the prefix "Power."

The architecture of GC systems is based on self-contained GigaCubes. The basic architectural element of a Parsytec system was a cluster, which consisted, among other components, of four transputers/processors (i.e., a cluster is a node in the classical sense).

A GigaCube (sometimes referred to as a supernode or meganode)[14] consisted of four clusters (nodes), each with 16 Inmos T805 transputers (30 MHz), RAM (up to 4 MB per T805), and an additional redundant T805 (the 17th processor). It also included local link connections and four Inmos C004 routing chips. Hardware fault tolerance was achieved by linking each T805 to a different C004.[15] The unusual spelling of x'plorer led to variations like xPlorer, and the Gigacluster is sometimes referred to as the GigaCube or Grand Challenge.
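The figures above can be tallied in a short sketch (Python is used purely for the arithmetic; the assumption that the quoted 4 MB per T805 applies to the 16 worker transputers of each cluster is ours):

```python
# Back-of-the-envelope tally of a GigaCube's composition, using only
# the figures quoted above: 4 clusters, each with 16 T805 transputers
# plus one redundant T805, and up to 4 MB of RAM per T805.
CLUSTERS_PER_CUBE = 4
WORKER_T805_PER_CLUSTER = 16
REDUNDANT_T805_PER_CLUSTER = 1   # the 17th, fault-tolerance processor
MAX_RAM_PER_T805_MB = 4

workers = CLUSTERS_PER_CUBE * WORKER_T805_PER_CLUSTER            # 64
total_t805 = CLUSTERS_PER_CUBE * (WORKER_T805_PER_CLUSTER
                                  + REDUNDANT_T805_PER_CLUSTER)  # 68
# Assumption: RAM is counted for the worker transputers only.
max_ram_mb = workers * MAX_RAM_PER_T805_MB                       # 256

print(workers, total_t805, max_ram_mb)
```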

Megaframe

Megaframe[16] [17] was the product name for a family of transputer-based parallel processing modules,[18] some of which could be used to upgrade an IBM PC.[19] As a standalone system, a Megaframe could hold up to ten processor modules. Different versions of the modules were available, such as one featuring a 32-bit transputer T414 with floating-point hardware (Motorola 68881), 1 MB of RAM (80 nanosecond access time), and a throughput of 10 MIPS, or one with four 16-bit transputers (T22x) with 64 kB of RAM. Additionally, cards for special features were offered, including a graphics processor with a resolution of 1280 x 1024 pixels and an I/O "cluster" with terminal and SCSI interfaces.[20]

Multicluster

The MultiCluster-1 series consisted of statically configurable systems that could be tailored to specific user requirements, such as the number of processors, amount of memory, I/O configuration, and system topology. The required processor topology could be configured using UniLink connections, fed through a special backplane. Additionally, four external sockets were provided.

Multicluster-2 used network configuration units (NCUs) that provided flexible, dynamically configurable interconnection networks. The multiuser environment could support up to eight users through Parsytec's multiple virtual architecture software. The NCU design was based on the Inmos crossbar switch, the C004, which offers full crossbar connectivity for up to 16 transputers. Each NCU, made of C004s, connected up to 96 UniLinks, linking internal as well as external transputers and other I/O subsystems. MultiCluster-2 allowed for the configuration of various fixed interconnection topologies, such as tree or mesh structures.[14]
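The dynamically configurable interconnect described above can be modelled in a few lines. This is a toy illustration of a C004-style crossbar, not Parsytec's NCU software; the class and method names are hypothetical:

```python
class Crossbar:
    """Toy model of a C004-style crossbar switch: any input link can be
    routed to any free output link, and routes can be reconfigured at
    run time. Illustrative only -- not Parsytec's NCU implementation."""

    def __init__(self, ports=16):        # the C004 connects 16 links
        self.ports = ports
        self.route = {}                  # input port -> output port

    def connect(self, src, dst):
        if not (0 <= src < self.ports and 0 <= dst < self.ports):
            raise ValueError("port out of range")
        if dst in self.route.values():
            raise ValueError("output port already in use")
        self.route[src] = dst            # full crossbar: no other constraint

    def disconnect(self, src):
        self.route.pop(src, None)

xb = Crossbar()
xb.connect(0, 5)
xb.connect(3, 7)
xb.disconnect(0)   # reconfiguration frees output 5 ...
xb.connect(1, 5)   # ... so it can be reassigned
```

Fixed topologies such as trees or meshes correspond to particular static choices of the `route` mapping.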

SuperCluster

SuperCluster had a hierarchical, cluster-based design. A basic unit was a fully connected cluster of 16 T800 transputers; larger systems included additional levels of NCUs to form the necessary connections. The Network Configuration Manager (NCM) software controlled the NCUs and dynamically established the required connections. Each transputer could be equipped with 1 to 32 MB of dynamic RAM, with single-error correction and double-error detection.[14]

GigaCluster

The GigaCluster (GC) was a parallel computer produced in the early 1990s. A GigaCluster was composed of GigaCubes.

Designed for the Inmos T9000 transputer, the GigaCluster could never be launched as originally planned because the T9000 never made it to market on time. This led to the development of the GC/PP (PowerPlus), in which two Motorola MPC 601 processors (80 MHz) served as the dedicated CPUs, supported by four Inmos T805 transputers (30 MHz).[21]

While the GC/PP was a hybrid system, the GCel ("entry level") was based solely on the T805.[22] [23] The GCel was designed to be upgradeable to the T9000 transputers (had they arrived in time), thus becoming a full GC. Since the T9000 was Inmos' evolutionary successor to the T800, the upgrade was planned to be simple and straightforward. This was because, firstly, both transputers shared the same instruction set, and secondly, they had a similar performance ratio of compute power to communication throughput. A theoretical speed-up factor of 10 was expected,[24] but in the end, it was never achieved.

The network structure of the GC was a two-dimensional lattice, with an inter-communication speed between the nodes (i.e., clusters in Parsytec's terminology) of 20 Mbit/s. For its time, the concept of the GC was exceptionally modular and scalable.

A so-called GigaCube was a module that already constituted a one-gigaflop system and served as the building block for larger systems. Each module (or "cube" in Parsytec's terminology) contained 64 transputers, and by combining modules one could theoretically connect up to 16,384 processors to create a very powerful system.
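The resulting configuration sizes can be tabulated under one assumption: that each GC step quadruples the number of cubes, which is consistent with the GC-1 (one cube of 64 transputers, per the x'plorer section) and the shipped 1,024-processor systems (16 modules) mentioned below.

```python
# Hypothetical tabulation of GC configuration sizes, assuming each GC
# step quadruples the cube count -- consistent with GC-1 (1 cube,
# 64 transputers) and the shipped 1,024-processor / 16-module systems.
TRANSPUTERS_PER_CUBE = 64

def gc_size(n):
    """Return (cubes, transputers) for a hypothetical GC-n."""
    cubes = 4 ** (n - 1)
    return cubes, cubes * TRANSPUTERS_PER_CUBE

for n in range(1, 6):
    cubes, procs = gc_size(n)
    print(f"GC-{n}: {cubes:4d} cubes, {procs:6d} transputers")
```

Under this assumption, a GC-5 reaches the quoted maximum of 16,384 processors (256 cubes).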

Typical installations ranged from small configurations up to the two largest GC systems actually shipped, which each had 1,024 processors (16 modules, with 64 transputers per module) and were operated at the data centers of the Universities of Cologne and Paderborn. In October 2004, the Paderborn system was transferred to the Heinz Nixdorf MuseumsForum,[25] where it is now inoperable.

The power consumption of a system with 1,024 processors was approximately 27 kW, and its weight was nearly a ton. In 1992, the system was priced at around 1.5 million DM. While the smaller versions, up to GC-3, were air-cooled, water cooling was mandatory for the larger systems.

In 1992, a GC with 1,024 processors ranked on the TOP500 list[26] of the world's fastest supercomputer installations; within Germany, it was the 22nd fastest computer.

In 1995, there were nine Parsytec computers on the TOP500 list, including two GC/PP 192 installations, which ranked 117th and 188th.[27]

In 1996, they still ranked 230th and 231st on the TOP500 list.[28] [29]

x'plorer

The x'plorer model came in two versions: The initial version featured 16 transputers, each with access to 4 MB of RAM, and was called x'plorer. Later, when Parsytec switched to the PPC architecture, it was renamed POWERx'plorer and featured 8 MPC 601 CPUs. Both models were housed in the same desktop case, designed by Via 4 Design.[30]

In any model, the x'plorer was essentially a single "slice" — which Parsytec referred to as a cluster[31] — of a GigaCube (PPC or Transputer), with the smallest version (GC-1) using 4 of these clusters. As a result, some referred to it as a "GC-0.25."[32]

The POWERx'plorer was based on 8 processing units arranged in a 2D mesh. Each processing unit included:

  1. One 80 MHz MPC 601 processor,
  2. 8 MB of local memory, and
  3. A transputer for establishing and maintaining communication links.[33]
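The mesh interconnect described above determines which units each transputer must link to. A minimal sketch, assuming a 2 × 4 layout for the eight units (the text says only "2D mesh", so the exact shape is our assumption):

```python
# Neighbour lookup for 8 processing units in a 2-D mesh. The 2 x 4
# layout is an assumption for illustration; the text only states that
# the units were "arranged in a 2D mesh".
ROWS, COLS = 2, 4

def neighbours(unit):
    """Yield the mesh neighbours of a unit numbered in row-major order."""
    r, c = divmod(unit, COLS)
    for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
        nr, nc = r + dr, c + dc
        if 0 <= nr < ROWS and 0 <= nc < COLS:
            yield nr * COLS + nc

print(sorted(neighbours(0)))   # a corner unit has two neighbours
print(sorted(neighbours(5)))   # an inner unit has three in a 2 x 4 mesh
```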

Cognitive Computer

The Parsytec CC (Cognitive Computer) system[34] [35] [36] was an autonomous unit at the card rack level.

The CC card rack subsystem provided the system with its infrastructure, including power supply and cooling. The system could be configured as a standard 19-inch rack-mounted unit, which accepted various 6U plug-in modules.

The CC system[37] was a distributed memory, message-passing parallel computer and is globally classified in the MIMD category of parallel computers.

The system was available in two different versions.

In all CC systems, the nodes were directly connected to the same router, which implemented an active hardware 8×8 crossbar switch for up to eight connections using the 40 MB/s high-speed links.

Regarding the CCe, the software was based on IBM's AIX 4.1 UNIX operating system, along with Parsytec's parallel programming environment, Embedded PARIX (EPX).[39] This setup combined a standard UNIX environment (compilers, tools, and libraries) with an advanced software development environment. The system was integrated into the local area network using standard Ethernet. A CC node had a peak performance of 266 MFLOPS; the 8-node CC system installed at Geneva University Hospital therefore had a peak performance of 2.1 GFLOPS.[40]
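The quoted system figure follows directly from the per-node figure:

```python
# Checking the CC performance figures quoted above:
# 8 nodes at 266 MFLOPS each give the stated ~2.1 GFLOPS peak.
node_mflops = 266
nodes = 8
system_gflops = node_mflops * nodes / 1000   # 2.128, quoted as 2.1 GFLOPS
print(round(system_gflops, 1))
```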

Powermouse

Powermouse was another scalable system that consisted of modules and individual components. It was a straightforward extension of the x'plorer system.[38] Each module (dimensions: 9 cm x 21 cm x 45 cm) contained four MPC 604 processors (200/300 MHz) and 64 MB of RAM, achieving a peak performance of 2.4 GFLOPS. A separate communication processor (T425) equipped with 4 MB of RAM[41] controlled the data flow in four directions to other modules in the system. The bandwidth of a single node was 9 MB/s.

For about 35,000 DM, a basic system consisting of 16 CPUs (i.e., four modules) could provide a total computing power of 9.6 Gflop/s. As with all Parsytec products, Powermouse required a Sun SPARCstation as the front-end.
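The Powermouse figures are mutually consistent, as a quick check shows:

```python
# Sanity-checking the Powermouse figures quoted above: a basic system
# of four modules (4 CPUs each) should match the stated 16 CPUs and
# 9.6 Gflop/s total.
cpus_per_module = 4
module_gflops = 2.4
modules = 4

total_cpus = cpus_per_module * modules    # 16
total_gflops = module_gflops * modules    # 9.6
print(total_cpus, total_gflops)
```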

All software, including PARIX with C++ and Fortran 77 compilers and debuggers (alternatively providing MPI or PVM as user interfaces), was included.[42]

Operating system

The operating system used was PARIX (PARallel UnIX extensions)[43] – PARIX T8 for the T80x transputers and PARIX T9 for the T9000 transputers, respectively. Based on UNIX, PARIX[44] supported remote procedure calls and was compliant with the POSIX standard. PARIX provided UNIX functionality at the front-end (e.g., a Sun SPARCstation, which had to be purchased separately) with library extensions for the needs of the parallel system at the back-end, which was the Parsytec product itself (connected to the front-end for operation). The PARIX software package included components for the program development environment (compilers, tools, etc.) and the runtime environment (libraries). PARIX offered various types of synchronous and asynchronous communication.

In addition, Parsytec provided a parallel programming environment called Embedded PARIX (EPX).[39]

To develop parallel applications using EPX, data streams and function tasks were allocated to a network of nodes. The data handling between processors required only a few system calls. Standard routines for synchronous communication, such as send and receive, were available, as well as asynchronous system calls. The full set of EPX calls formed the EPX application programming interface (API). The destination for any message transfer was defined through a virtual channel that ended at any user-defined process. Virtual channels were managed by EPX and could be defined by the user. The actual message delivery system utilized the router.[40] Additionally, COSY (Concurrent Operating SYstem)[45] and Helios could also be run on the machines. Helios supported Parsytec's special reset mechanism out of the box.
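The virtual-channel model described above can be sketched as follows. This is an illustrative Python model of the concept only; `VirtualChannel`, `send`, and `receive` are hypothetical names, not the real EPX API:

```python
import queue
import threading

class VirtualChannel:
    """Toy model of an EPX-style virtual channel: senders post messages
    asynchronously, and the user-defined destination process blocks on
    receive(). Illustrative names only -- not the real EPX API."""

    def __init__(self):
        self._q = queue.Queue()

    def send(self, msg):                 # asynchronous send: never blocks
        self._q.put(msg)

    def receive(self, timeout=1.0):      # synchronous receive: blocks
        return self._q.get(timeout=timeout)

# One "node" sends over the channel; the destination process receives.
ch = VirtualChannel()
sender = threading.Thread(target=ch.send, args=("hello from node 0",))
sender.start()
sender.join()
msg = ch.receive()
print(msg)
```

In the real system, delivery over such channels was carried out by the router hardware rather than a shared in-memory queue.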

Notes and References

  1. http://research.microsoft.com/en-us/um/people/gbell/cgb%20files/massively%20parallel%20computers%209210%20c.pdf Massively Parallel Computers: Why Not Parallel Computers for the Masses?
  2. http://authors.library.caltech.edu/12594/1/HOFsiamjsc96.pdf Alternating-Direction Line-Relaxation Methods on Multicomputers
  3. http://www.zeit.de/1988/41/duell-der-zahlenfresser Duell der Zahlenfresser
  4. http://www.new-npac.org/projects/cdroms/cewes-1999-06-vol1/nhse/hpccsurvey/orgs/parsytec/parsytec.html Parsytec GmbH
  5. http://www.geekdot.com/index.php?page=parsytec Parsytec
  6. http://www.parsytec.de/fileadmin/parsytec/files_content/Investors/Jahresabschluss_ParsytecAG_2006.pdf Annual Statement of Accounts 2006
  7. http://www.finanznachrichten.de/nachrichten-2007-07/8661112-isra-vision-uebernimmt-parsytec-009.htm ISRA Vision übernimmt Parsytec
  8. http://www.equinet-ag.de/index.php?id=101&tx_ttnews%5Btt_news%5D=681&cHash=19dd781342b7ba9b7bfea5968e1b4ff2 ISRA VISION AG - Erwerb der Mehrheit an der Parsytec AG
  9. http://www.parsytec.de/index.php?id=2880&L=0 Investor Relations at ISRA
  10. http://www.isravision.com/media/public/pdf2014/investor-relations/reports/isra_gb_2012_2013_en.pdf Annual Report 2012/2013
  11. http://cordis.europa.eu/search/index.cfm?fuseaction=proj.document&PJ_LANG=EN&PJ_RCN=295157 General-Purpose MIMD Machines
  12. http://cordis.europa.eu/search/index.cfm?fuseaction=prog.document&PG_RCN=176309 European programme (EEC) for research and development in information technologies (ESPRIT), 1984-1988
  13. https://research.microsoft.com/en-us/people/efp/pap95thesis.pdf A Framework for Characterising Parallel Systems for Performance Evaluation
  14. https://apps.dtic.mil/sti/pdfs/ADA285782.pdf ESN Information Bulletin 92-08
  15. https://apps.dtic.mil/sti/pdfs/ADA243118.pdf Hypercube Solutions for Conjugate Directions, J. E. Hartman (1991)
  16. http://www.classiccmp.org/transputer/documentation/parsytec/parsy_tpm1_ger.pdf MEGAFRAME TPM-1 - Hardware Documentation Ver. 1.2 (1987)
  17. http://www.classiccmp.org/transputer/documentation/parsytec/parsy_mtm2.pdf MEGAFRAME MTM-2 - Hardware Documentation Ver. 1.3 (1987)
  18. http://www.computerwoche.de/heftarchiv/1987/22/1159921/ MEGAFRAME Familie
  19. http://www.classiccmp.org/transputer/megaframe.htm Ram Meenakshisundaram's Transputer Home Page
  20. http://www.computerwoche.de/heftarchiv/1987/11/1158717/ Transputersystem ist frei konfigurierbar
  21. http://www.netlib.org/benchmark/top500/reports/report94/Architec/node29.html The Parsytec Power Plus
  22. https://archive.today/20121129195920/http://134.28.49.61/Inter/LiteraturE14.nsf/b60bd8daf7ea9cf5412567010038c69d/b2df8cc3fd0495d1c125728300612753?OpenDocument Programmierung und Anwendung der Parallelrechnersysteme Parsytec SC und Parsytec GC/PP
  23. http://www.iss.tu-darmstadt.de/publications/downloads/ess97.pdf Synthesizing massive parallel simulation systems to estimate switching activity in finite state machines
  24. http://www.geekdot.com/index.php?page=gigacube Gigacube
  25. http://www2.hnf.de/index_en.html Homepage of the Heinz Nixdorf Museum Forum
  26. http://www.top500.org TOP500 List
  27. https://web.archive.org/web/20101225071300/http://top500.org/static/lists/xml/TOP500_199506_all.xml TOP500 List, June 1995
  28. http://ocw.mit.edu/courses/mathematics/18-337j-applied-parallel-computing-sma-5505-spring-2005/lecture-notes/chapter_6.pdf Lecture Notes on Applied Parallel Computing
  29. http://www.zeit.de/1997/05/parallel.txt.19970124.xml/seite-3 Viel hilft viel: Die neuen Supercomputer haben Billigprozessoren wie der PC nebenan - aber zu Tausenden
  30. http://exhibition.ifdesign.de/winner_en.html?ma_id=11303 iF Online Exhibition - Via 4 Design
  31. Web site: Picture of cluster . https://web.archive.org/web/20060819064534/http://www.parallab.uib.no/resources/history/113_1335.JPG . 2006-08-19 . www.parallab.uib.no.
  32. http://www.geekdot.com/index.php?page=x-plorer x'plorer
  33. http://www.ubicc.org/files/pdf/Sulieman_PowerXplorer_UBICC_134_134.pdf Experimental Study on Time and Space Sharing on the PowerXplorer
  34. Web site: Современные системы фирмы Parsytec . 2024-11-23 . www.ccas.ru.
  35. http://www.classiccmp.org/transputer/software/oses/parix/stuff/CC_Hardware.pdf Parsytec CC Series (Hardware.Documentation), Rev. 1.2 (1996) Parsytec GmbH
  36. http://www.netlib.org/utk/papers/advanced-computers/parsytec.html The Parsytec CC series
  37. http://parallel.di.uoa.gr/BENCHMARKS/MACHINES/index.html Target Machines
  38. http://www.ssd.sscc.ru/PaCT/hardware/computer/#ParsytecPM Parallel Computing Hardware
  39. https://web.archive.org/web/20030509124439/http://www.csa.ru/CSA/tutor/parsa/epx.ps Embedded Parix Ver. 1.9.2, Software Documentation (1996)
  40. http://pinlab.hcuge.ch/pdf/Parallel_Computing_98.pdf Implementation of an Environment for Monte Carlo simulation of Fully 3-D Positron Tomography on a High-Performance Parallel Platform
  41. http://www.csa.ru/education/lib/Parsytec/Parsytec/powermouse.html System Parsytec Power Mouse in CSA
  42. http://www.computerwoche.de/heftarchiv/1997/38/1101338/ Parsytec liefert Baukasten für Parallelrechner
  43. https://docs.google.com/viewer?a=v&pid=sites&srcid=ZGVmYXVsdGRvbWFpbnxwYXJzeXRlY3RyYW5zcHV0ZXJ8Z3g6NmY4YTYxZGI0OGRlNmNjNQ PARIX Release 1.2 Software Documentation
  44. Web site: Seite nicht erreichbar - Universität Osnabrück . 2024-12-01 . www.informatik.uni-osnabrueck.de.
  45. http://www2.cs.uni-paderborn.de/fachbereich/AG/heiss/cosy/papers/TAT-94-Cosy-Bericht.pdf COSY – ein Betriebssystem für hochparallele Computer