Print Price: $224.99
Format: Hardback, 576 pp., 381 line illus., 236 mm x 198 mm
ISBN-13: 9780195154559
Copyright Year: 2005
Imprint: OUP US

Computer Architecture

From Microprocessors to Supercomputers

Behrooz Parhami

Series: The Oxford Series in Electrical and Computer Engineering

Computer Architecture: From Microprocessors to Supercomputers provides a comprehensive introduction to this thriving and exciting field. Emphasizing both underlying theory and actual designs, the book covers a wide array of topics and links computer architecture to other subfields of computing. The material is presented in lecture-sized chapters that make it easy for students to understand the relationships between various topics and to see the "big picture." The short chapters also give instructors the flexibility to sequence course topics as they prefer.

The text is divided into seven parts, each containing four chapters. Part I provides context and reviews prerequisite topics, including digital computer technology and computer system performance. Part II discusses instruction-set architecture. The next two parts cover the central processing unit: Part III describes the structure of arithmetic/logic units, and Part IV is devoted to data path and control circuits. Part V deals with the memory system. Part VI covers input/output and interfacing topics, and Part VII introduces advanced architectures.

Computer Architecture: From Microprocessors to Supercomputers is designed for introductory courses and is suitable for students majoring in electrical engineering, computer science, or computer engineering.

* An Instructor's Manual (0-19-522213-X) and a CD with PowerPoint® presentations (0-19-522219-9) are available to adopters.

* Visit the companion website at: http://www.ece.ucsb.edu/Faculty/Parhami/text_comp_arch.htm

Readership: Introductory courses in computer architecture, suitable for students majoring in electrical engineering, computer science, or computer engineering

Preface
PART 1: BACKGROUND AND MOTIVATION
1. Combinational Digital Circuits
1.1. Signals, Logic Operators, and Gates
1.2. Boolean Functions and Expressions
1.3. Designing Gate Networks
1.4. Useful Combinational Parts
1.5. Programmable Combinational Parts
1.6. Timing and Circuit Considerations
2. Digital Circuits with Memory
2.1. Latches, Flip-Flops, and Registers
2.2. Finite-State Machines
2.3. Designing Sequential Circuits
2.4. Useful Sequential Parts
2.5. Programmable Sequential Parts
2.6. Clocks and Timing of Events
3. Computer System Technology
3.1. From Components to Applications
3.2. Computer Systems and Their Parts
3.3. Generations of Progress
3.4. Processor and Memory Technologies
3.5. Peripherals, I/O, and Communications
3.6. Software Systems and Applications
4. Computer Performance
4.1. Cost, Performance, and Cost/Performance
4.2. Defining Computer Performance
4.3. Performance Enhancement and Amdahl's Law
4.4. Performance Measurement vs. Modeling
4.5. Reporting Computer Performance
4.6. The Quest for Higher Performance
PART 2: INSTRUCTION-SET ARCHITECTURE
5. Instructions and Addressing
5.1. Abstract View of Hardware
5.2. Instruction Formats
5.3. Simple Arithmetic and Logic Instructions
5.4. Load and Store Instructions
5.5. Jump and Branch Instructions
5.6. Addressing Modes
6. Procedures and Data
6.1. Simple Procedure Calls
6.2. Using the Stack for Data Storage
6.3. Parameters and Results
6.4. Data Types
6.5. Arrays and Pointers
6.6. Additional Instructions
7. Assembly Language Programs
7.1. Machine and Assembly Languages
7.2. Assembler Directives
7.3. Pseudoinstructions
7.4. Macroinstructions
7.5. Linking and Loading
7.6. Running Assembler Programs
8. Instruction-Set Variations
8.1. Complex Instructions
8.2. Alternative Addressing Modes
8.3. Variations in Instruction Formats
8.4. Instruction Set Design and Evolution
8.5. The RISC/CISC Dichotomy
8.6. Where to Draw the Line
PART 3: THE ARITHMETIC/LOGIC UNIT
9. Number Representation
9.1. Positional Number Systems
9.2. Digit Sets and Encodings
9.3. Number-Radix Conversion
9.4. Signed Integers
9.5. Fixed-Point Numbers
9.6. Floating-Point Numbers
10. Adders and Simple ALUs
10.1. Simple Adders
10.2. Carry Propagation Networks
10.3. Counting and Incrementation
10.4. Design of Fast Adders
10.5. Logic and Shift Operations
10.6. Multifunction ALUs
11. Multipliers and Dividers
11.1. Shift-Add Multiplication
11.2. Hardware Multipliers
11.3. Programmed Multiplication
11.4. Shift-Subtract Division
11.5. Hardware Dividers
11.6. Programmed Division
12. Floating-Point Arithmetic
12.1. Rounding Modes
12.2. Special Values and Exceptions
12.3. Floating-Point Addition
12.4. Other Floating-Point Operations
12.5. Floating-Point Instructions
12.6. Result Precision and Errors
PART 4: DATA PATH AND CONTROL
13. Instruction Execution Steps
13.1. A Small Set of Instructions
13.2. The Instruction Execution Unit
13.3. A Single-Cycle Data Path
13.4. Branching and Jumping
13.5. Deriving the Control Signals
13.6. Performance of the Single-Cycle Design
14. Control Unit Synthesis
14.1. A Multicycle Implementation
14.2. Clock Cycle and Control Signals
14.3. The Control State Machine
14.4. Performance of the Multicycle Design
14.5. Microprogramming
14.6. Dealing with Exceptions
15. Pipelined Data Paths
15.1. Pipelining Concepts
15.2. Pipeline Stalls or Bubbles
15.3. Pipeline Timing and Performance
15.4. Pipelined Data Path Design
15.5. Pipelined Control
15.6. Optimal Pipelining
16. Pipeline Performance Limits
16.1. Data Dependencies and Hazards
16.2. Data Forwarding
16.3. Pipeline Branch Hazards
16.4. Branch Prediction
16.5. Advanced Pipelining
16.6. Exceptions in a Pipeline
PART 5: MEMORY SYSTEM DESIGN
17. Main Memory Concepts
17.1. Memory Structure and SRAM
17.2. DRAM and Refresh Cycles
17.3. Hitting the Memory Wall
17.4. Pipelined and Interleaved Memory
17.5. Nonvolatile Memory
17.6. The Need for a Memory Hierarchy
18. Cache Memory Organization
18.1. The Need for a Cache
18.2. What Makes a Cache Work?
18.3. Direct-Mapped Cache
18.4. Set-Associative Cache
18.5. Cache and Main Memory
18.6. Improving Cache Performance
19. Mass Memory Concepts
19.1. Disk Memory Basics
19.2. Organizing Data on Disk
19.3. Disk Performance
19.4. Disk Caching
19.5. Disk Arrays and RAID
19.6. Other Types of Mass Memory
20. Virtual Memory and Paging
20.1. The Need for Virtual Memory
20.2. Address Translation in Virtual Memory
20.3. Translation Lookaside Buffer
20.4. Page Replacement Policies
20.5. Main and Mass Memories
20.6. Improving Virtual Memory Performance
PART 6: INPUT/OUTPUT AND INTERFACING
21. Input/Output Devices
21.1. Input/Output Devices and Controllers
21.2. Keyboard and Mouse
21.3. Visual Display Units
21.4. Hard-Copy Input/Output Devices
21.5. Other Input/Output Devices
21.6. Networking of Input/Output Devices
22. Input/Output Programming
22.1. I/O Performance and Benchmarks
22.2. Input/Output Addressing
22.3. Scheduled I/O: Polling
22.4. Demand-Based I/O: Interrupts
22.5. I/O Data Transfer and DMA
22.6. Improving I/O Performance
23. Buses, Links, and Interfacing
23.1. Intra- and Intersystem Links
23.2. Buses and Their Appeal
23.3. Bus Communication Protocols
23.4. Bus Arbitration and Performance
23.5. Basics of Interfacing
23.6. Interfacing Standards
24. Context Switching and Interrupts
24.1. System Calls for I/O
24.2. Interrupts, Exceptions, and Traps
24.3. Simple Interrupt Handling
24.4. Nested Interrupts
24.5. Types of Context Switching
24.6. Threads and Multithreading
PART 7: ADVANCED ARCHITECTURES
25. Road to Higher Performance
25.1. Past and Current Performance Trends
25.2. Performance-Driven ISA Extensions
25.3. Instruction-Level Parallelism
25.4. Speculation and Value Prediction
25.5. Special-Purpose Hardware Accelerators
25.6. Vector, Array, and Parallel Processing
26. Vector and Array Processing
26.1. Operations on Vectors
26.2. Vector Processor Implementation
26.3. Vector Processor Performance
26.4. Shared-Control Systems
26.5. Array Processor Implementation
26.6. Array Processor Performance
27. Shared-Memory Multiprocessing
27.1. Centralized Shared Memory
27.2. Multiple Caches and Cache Coherence
27.3. Implementing Symmetric Multiprocessors
27.4. Distributed Shared Memory
27.5. Directories to Guide Data Access
27.6. Implementing Asymmetric Multiprocessors
28. Distributed Multicomputing
28.1. Communication by Message Passing
28.2. Interconnection Networks
28.3. Message Composition and Routing
28.4. Building and Using Multicomputers
28.5. Network-Based Distributed Computing
28.6. Grid Computing and Beyond
Index

Behrooz Parhami is Professor of Computer Engineering at the University of California, Santa Barbara. He has written several textbooks, including Computer Arithmetic (OUP, 2000), and more than 200 research papers. He is a fellow of both the Institute of Electrical and Electronics Engineers (IEEE) and the British Computer Society (BCS). He is a member of the Association for Computing Machinery (ACM) and a distinguished member of the Informatics Society of Iran, of which he was a founding member and the first president.
