Algorithms and Parallel Computing
by Fayez Gebali
Wiley, April 2011
364 pages

2.6 VERY LONG INSTRUCTION WORD (VLIW) PROCESSORS

This technique is considered fine-grain parallelism, since the algorithm is parallelized at the instruction level, the finest level of detail into which one could hope to divide an algorithm. In a VLIW processor, several instructions, or opcodes, are sent to the CPU to be executed simultaneously. The compiler picks the instructions to be issued together in one VLIW word. It must ensure that there is no dependency between the instructions in a VLIW word and that the hardware can support executing all the issued instructions [20]. This gives VLIW a potential advantage over dynamic instruction pipelining, since instruction scheduling is done before the code is actually run.

Figure 2.16 illustrates a processor that uses VLIW to control the operation of two datapath units. Figure 2.16a shows a schematic of the processor, where each VLIW word contains two instructions; each instruction controls one datapath unit. Figure 2.16b shows the content of the VLIW word at different processing cycles. The figure is based on the ones presented in References 18 and 24. Each row represents one VLIW word issue, and the vertical axis represents machine cycles. A gray box indicates an instruction within the VLIW word; an empty box indicates a no-op. The compiler inserts a no-op when it cannot fill a slot with an independent instruction, either because of data dependencies or because the required datapath unit is unavailable.

Figure 2.16 A VLIW word containing two instructions to independently ...
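The compile-time packing described above can be sketched as follows. This is a minimal illustrative model, not the book's algorithm: the two-slot word width matches Figure 2.16, but the instruction tuples, register names, and the greedy dependency check are assumptions made for the example.

```python
# Sketch of compile-time VLIW scheduling for a 2-wide word:
# pack independent instructions into words, padding unfilled
# slots with no-ops (the empty boxes of Figure 2.16b).
# Instruction format (name, reads, writes) is a modeling assumption.

def pack_vliw(instructions, width=2):
    """Greedily bundle instructions into VLIW words.

    An instruction may join the current word only if it does not
    read or write a register written by an instruction already
    placed in that word (i.e., the slots are independent).
    """
    words, current, written = [], [], set()
    for name, reads, writes in instructions:
        dependent = (set(reads) | set(writes)) & written
        if dependent or len(current) == width:
            # Close the word, padding unused slots with no-ops.
            words.append(current + ["nop"] * (width - len(current)))
            current, written = [], set()
        current.append(name)
        written |= set(writes)
    if current:
        words.append(current + ["nop"] * (width - len(current)))
    return words

program = [
    ("add r1, r2, r3", ["r2", "r3"], ["r1"]),
    ("mul r4, r5, r6", ["r5", "r6"], ["r4"]),
    ("sub r7, r1, r4", ["r1", "r4"], ["r7"]),  # depends on both above
]
for word in pack_vliw(program):
    print(word)
```

The first two instructions are independent, so they share a word and can drive the two datapath units in the same cycle; the third reads results of both, so it is issued alone in the next word with a no-op filling the second slot.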



ISBN: 9780470934630