Harvard architecture books

Status
Not open for further replies.

Hi,
Can you suggest one or more books about the Harvard architecture? I'm looking for a book that goes in depth.
I don't understand the differences between the Harvard and von Neumann architectures except for separate memories for "data and instructions and two sets of address/data buses between CPU and memory". I always read that statement, but I'd like to know more details.
There are many books about "Computer organization and architecture", but I don't know whether those books will help me understand Harvard.
For example, do MBR and MAR registers exist on a Harvard machine? I'm a bit confused.
Can you point me in the right direction?

Thanks so much
 
I'm pretty sure those are just general differences and the implementation details are all up to the designer. Are you sure you aren't taking the classification too literally and assuming there are a lot of well-defined, detailed structural differences between the two?
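For what it's worth, the one structural difference everyone cites can be shown in a toy simulation. This is purely illustrative (the mini instruction set and machine model here are made up, not any real ISA): a von Neumann machine fetches instructions and data from one memory, a Harvard machine uses two separate memories.

```python
# Toy sketch of the textbook difference (hypothetical machine, not a real ISA).

def run_von_neumann(memory, pc=0):
    """One array holds both code and data; fetch and data access share it."""
    acc = 0
    while True:
        op, arg = memory[pc]        # instruction fetch: same memory as data
        pc += 1
        if op == "LOAD":
            acc = memory[arg]       # data access: same memory again
        elif op == "ADD":
            acc += memory[arg]
        elif op == "HALT":
            return acc

def run_harvard(program, data, pc=0):
    """Separate instruction and data memories; on real hardware the two
    buses let an instruction fetch overlap a data access."""
    acc = 0
    while True:
        op, arg = program[pc]       # instruction memory
        pc += 1
        if op == "LOAD":
            acc = data[arg]         # data memory
        elif op == "ADD":
            acc += data[arg]
        elif op == "HALT":
            return acc

# Same tiny program, two memory organizations:
code = [("LOAD", 3), ("ADD", 4), ("HALT", 0)]
print(run_von_neumann(code + [2, 3]))    # 5: data sits after the code, addresses 3 and 4
print(run_harvard(code, {3: 2, 4: 3}))   # 5: data lives in its own memory
```

Note that in the von Neumann version the program could overwrite its own instructions (they are just memory entries), while in the Harvard version it can't reach them through data addresses at all; the programmer-visible result is the same either way.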
 
Hi
Don't well-defined differences exist between the two? If they don't, maybe that is my problem.
The idea of learning about this architecture was "born" when I started reading the AVR's datasheet and a book about computer organization and architecture.
 
I don't think so, because there is also the modified Harvard architecture, which is not as strict about memory separation as pure Harvard.
 


Why is it so important to you to understand the deep intricacies? What kind of work do you plan?
I am asking because there are few applications that require such a deep understanding of a micro's inner workings. Most people are satisfied with learning one, two or three micro families and figuring out how to make the micro sing the song you need it to sing.
 
That's it really..... von Neumann is like the Motorola 68xx CPUs: one huge address space, all I/O is an address in the general space. (There are more examples, but I can't think of any!! I think PowerPC was von Neumann and possibly the Rockwell 6502.)

Harvard and modified Harvard is everything else... Intel, PIC, ARM all have two address spaces and proper I/O ports..

As a programmer, you wouldn't give a diddly squat!! as both will do the same to a certain degree.. Von Neumann has an issue with data swapping as the buses are commoned!! I used to use the Philips XA, which was a kind of hybrid..
 
I have no plan; it's just for learning.
I have got the e-book "Computer Architecture: A Quantitative Approach" and it's awesome.

Example:

The type of internal storage in a processor is the most basic differentiation, so in this section we will focus on the alternatives for this portion of the architecture. The major choices are a stack, an accumulator, or a set of registers. Operands may be named explicitly or implicitly: the operands in a stack architecture are implicitly on the top of the stack, and in an accumulator architecture one operand is implicitly the accumulator. The general-purpose register architectures have only explicit operands, either registers or memory locations. Figure 2.1 shows a block diagram of such architectures and Figure 2.2 shows how the code sequence

C = A + B

would typically appear in these three classes of instruction sets. The explicit operands may be accessed directly from memory or may need to be first loaded into temporary storage, depending on the class of architecture and choice of specific instruction.
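Figure 2.2 isn't reproduced in the excerpt, but the three code sequences it shows can be mimicked with tiny interpreters. This is a hedged sketch: the comments use made-up mnemonics, and Python variables stand in for the stack, the accumulator, and the registers.

```python
# Hypothetical mini-interpreters for the three classes, evaluating C = A + B.
# `mem` maps variable names to values; C is written back into it.

def stack_machine(mem):
    # Operands are implicit: always the top of the stack.
    stack = []
    stack.append(mem["A"])            # PUSH A
    stack.append(mem["B"])            # PUSH B
    b, a = stack.pop(), stack.pop()
    stack.append(a + b)               # ADD (pops two operands, pushes result)
    mem["C"] = stack.pop()            # POP C

def accumulator_machine(mem):
    # One operand is implicit: the accumulator.
    acc = mem["A"]                    # LOAD A
    acc += mem["B"]                   # ADD B  (acc is implicit source and dest)
    mem["C"] = acc                    # STORE C

def register_machine(mem):
    # Load-store style: all operands are explicit registers.
    r1 = mem["A"]                     # LOAD  R1, A
    r2 = mem["B"]                     # LOAD  R2, B
    r3 = r1 + r2                      # ADD   R3, R1, R2
    mem["C"] = r3                     # STORE C, R3

for f in (stack_machine, accumulator_machine, register_machine):
    m = {"A": 2, "B": 3}
    f(m)
    print(f.__name__, m["C"])         # each computes C = 5
```

All three produce the same result; what differs is how many operands each instruction names explicitly, which is exactly the classification the excerpt is making.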
As the figures show, there are really two classes of register computers. One class can access memory as part of any instruction, called register-memory architecture, and the other can access memory only with load and store instructions, called load-store or register-register architecture. A third class, not found in computers shipping today, keeps all operands in memory and is called a memory-memory architecture. Some instruction set architectures have more registers than a single accumulator, but place restrictions on uses of these special registers. Such an architecture is sometimes called an extended accumulator or special-purpose register computer.
Although most early computers used stack or accumulator-style architectures, virtually every new architecture designed after 1980 uses a load-store register architecture. The major reasons for the emergence of general-purpose register (GPR) computers are twofold. First, registers, like other forms of storage internal to the processor, are faster than memory. Second, registers are more efficient for a compiler to use than other forms of internal storage. For example, on a register computer the expression

(A*B) - (B*C) - (A*D)

may be evaluated by doing the multiplications in any order, which may be more efficient because of the location of the operands or because of pipelining concerns (see Chapter 3). In contrast, on a stack computer the hardware must evaluate the expression in only one order, since operands are hidden on the stack, and it may have to load an operand multiple times.
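The operand-reuse point can be made concrete by counting memory loads for (A*B) - (B*C) - (A*D). This is a rough sketch of a pure stack machine (a real stack ISA with DUP/OVER-style instructions could avoid some of the reloads):

```python
def stack_eval(mem):
    """Evaluate (A*B) - (B*C) - (A*D) on a pure stack machine, counting
    memory loads. Operands are hidden on the stack, so B and A must be
    pushed from memory a second time when they are needed again."""
    stack, loads = [], 0
    def push(var):
        nonlocal loads
        stack.append(mem[var])
        loads += 1
    def op(f):
        b, a = stack.pop(), stack.pop()
        stack.append(f(a, b))
    push("A"); push("B"); op(lambda a, b: a * b)   # A*B
    push("B"); push("C"); op(lambda a, b: a * b)   # B*C
    op(lambda a, b: a - b)                         # (A*B) - (B*C)
    push("A"); push("D"); op(lambda a, b: a * b)   # A*D
    op(lambda a, b: a - b)                         # ... - (A*D)
    return stack.pop(), loads

def register_eval(mem):
    """Same expression on a register machine: each variable is loaded once
    into a register and reused, and the multiplies could go in any order."""
    a, b, c, d = mem["A"], mem["B"], mem["C"], mem["D"]   # 4 loads
    return (a * b) - (b * c) - (a * d), 4

mem = {"A": 2, "B": 3, "C": 4, "D": 5}
print(stack_eval(mem))      # (-16, 6): same value, but 6 memory loads
print(register_eval(mem))   # (-16, 4)
```

Both machines compute -16, but the stack machine touched memory six times where the register machine needed four loads, which is the traffic reduction the next paragraph describes.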
More importantly, registers can be used to hold variables. When variables are allocated to registers, the memory traffic reduces, the program speeds up (since registers are faster than memory), and the code density improves (since a register can be named with fewer bits than can a memory location).

As explained in Section 2.11, compiler writers would prefer that all registers be equivalent and unreserved. Older computers compromise this desire by dedicating registers to special uses, effectively decreasing the number of general-purpose registers. If the number of truly general-purpose registers is too small, trying to allocate variables to registers will not be profitable. Instead, the compiler will reserve all the uncommitted registers for use in expression evaluation. The dominance of hand-optimized code in the DSP community has led to DSPs with many special-purpose registers and few general-purpose registers.
How many registers are sufficient? The answer, of course, depends on the effectiveness of the compiler. Most compilers reserve some registers for expression evaluation, use some for parameter passing, and allow the remainder to be allocated to hold variables. Just as people tend to be bigger than their parents, new instruction set architectures tend to have more registers than their ancestors.
Two major instruction set characteristics divide GPR architectures. Both characteristics concern the nature of operands for a typical arithmetic or logical instruction (ALU instruction). The first concerns whether an ALU instruction has two or three operands. In the three-operand format, the instruction contains one result operand and two source operands. In the two-operand format, one of the operands is both a source and a result for the operation. The second distinction among GPR architectures concerns how many of the operands may be memory addresses in ALU instructions. The number of memory operands supported by a typical ALU instruction may vary from none to three. Figure 2.3 shows combinations of these two attributes with examples of computers. Although there are seven possi-
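The two-operand versus three-operand distinction in the last paragraph can be sketched like this (hypothetical register names; the only real-ISA reference is the x86 `add` example in the comment):

```python
# Hypothetical encodings of register ADD in the two GPR operand formats.

def add3(regs, dst, src1, src2):
    # Three-operand format: the result register is separate from both
    # sources, so neither source is destroyed (e.g. typical RISC style).
    regs[dst] = regs[src1] + regs[src2]

def add2(regs, dst, src):
    # Two-operand format: dst is both a source and the result, so its
    # old value is overwritten (e.g. x86 `add eax, ebx`).
    regs[dst] = regs[dst] + regs[src]

regs = {"r1": 0, "r2": 10, "r3": 5}
add3(regs, "r1", "r2", "r3")   # r1 = r2 + r3 -> 15; r2 and r3 unchanged
add2(regs, "r2", "r3")         # r2 = r2 + r3 -> 15; old r2 destroyed
print(regs)                    # {'r1': 15, 'r2': 15, 'r3': 5}
```

The trade-off the excerpt is pointing at: the two-operand form needs fewer bits per instruction but forces an extra copy whenever the old value of the destination is still needed.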

Thanks
 
I did have a Beeb. But my first encounter was with the RCA 1802 CMOS chip!! I think it was basically the same as the 6502... The board we had contained several 1852s for general I/O, all connected by N0, N1 and N2, a special multiplex system for all the support chips... We still have many systems out in the world... Cannot get parts now..
 