Talk:Harvard architecture

From Wikipedia, the free encyclopedia

Will you elaborate on the Harvard architecture: how it works, and why this architecture is contrasted with the von Neumann architecture?

The best way to understand this is to read both articles. However notice that BOTH (at the simplest conceptual level) operate on very similar fetch-execute cycles currently only explained in the von Neumann architecture article, the main difference being where/how they each get the instructions and data from. Most modern computers are actually a mixture of the two architectures. -- RTC 22:49 15 Jul 2003 (UTC)
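The fetch-execute point above can be sketched in code. This is my own toy illustration (the instruction format and opcodes are invented, not from either article): the same fetch-execute loop works for both architectures, and the only thing that changes is whether instructions and data come from separate memories or one shared memory.

```python
# Toy sketch: one fetch-execute loop, parameterized over where the
# instructions (imem) and the data (dmem) actually live.

def run(imem, dmem):
    """Fetch an (op, arg) pair from imem, execute it against dmem."""
    pc, acc = 0, 0
    while True:
        op, arg = imem[pc]       # fetch
        pc += 1
        if op == "LOAD":         # execute
            acc = dmem[arg]
        elif op == "ADD":
            acc += dmem[arg]
        elif op == "STORE":
            dmem[arg] = acc
        elif op == "HALT":
            return acc

# Harvard style: two physically separate memories, so an instruction
# fetch and a data access could happen at the same time.
program = [("LOAD", 0), ("ADD", 1), ("STORE", 2), ("HALT", 0)]
data = [3, 4, 0]
assert run(program, data) == 7

# von Neumann style: the same list serves as both memories, with the
# data stored just past the code (addresses 4-6 here).
shared = [("LOAD", 4), ("ADD", 5), ("STORE", 6), ("HALT", 0), 3, 4, 0]
assert run(shared, shared) == 7
```

In the shared-memory version nothing stops an instruction from reading or overwriting the program itself, which is exactly the property that separates the two models.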

I would assume that the Z3 used a Harvard architecture, since its instructions were read off of tape? I don't know if this is worth adding to the main text, and I won't add it myself as I'm not 100% sure.

Isn't this more a variant of the von Neumann architecture than a completely new architecture? The only difference is that the Harvard model can read and write memory at the same time, while the von Neumann model is still the basis of this model. I see the most important parts of the von Neumann model in the way this architecture handles the distinction between memory and the 'working brain', the CPU. Compare this to the neural network model, where no clear distinction between memory and CPU can be made.

I'm not that well versed in different forms of computer architecture, but I'm reading some information about neural networks; could someone with more architecture knowledge elaborate? --Soyweiser 30 Jan 2005

Actually the Harvard architecture is older than the von Neumann architecture, not newer. When first introduced, the von Neumann offered one major advantage over the Harvard: it stored the program in the same memory as data, making a program just another kind of data that programs could load/manipulate/output. The original Harvard architecture machines used physically separate memories, usually with different and incompatible technologies, making it impossible for programs to access the program memory as data. This difference made von Neumann machines much easier to program and quickly resulted in the decline of Harvard machines. It was only much later (with the introduction of instruction caches, or in special applications like digital signal processing) that the issue of speed reintroduced Harvard architecture features. -- RTC 07:42, 31 Jan 2005 (UTC)
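The "stored program" advantage RTC describes can be made concrete with a small sketch. This is my own hedged illustration (the interpreter and instruction encoding are invented for the example): when code lives in ordinary memory, one program can build or patch another program as plain data and then run it.

```python
# Toy interpreter: mem holds (op, arg) instruction pairs.

def execute(mem):
    """Run a tiny accumulator machine over a list of instructions."""
    pc, acc = 0, 0
    while True:
        op, arg = mem[pc]
        pc += 1
        if op == "LOADI":        # load immediate into accumulator
            acc = arg
        elif op == "ADDI":       # add immediate to accumulator
            acc += arg
        elif op == "HALT":
            return acc

# A "loader" manipulating a program as data: it patches an operand
# before execution. On the original Harvard machines, where program
# memory was a separate (often read-only) technology such as punched
# tape, a program could not rewrite instructions like this.
prog = [("LOADI", 1), ("ADDI", 1), ("HALT", 0)]
prog[1] = ("ADDI", 41)           # the program is just data we can rewrite
assert execute(prog) == 42
```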
Yes, you're right -- the only difference between Harvard architecture vs. Princeton architecture is "data and code all mixed up in one memory" vs. "data over here, programs over there in that completely distinct memory" [1]. So ... what's the difference between Princeton architecture vs. von Neumann architecture, if any? --DavidCary 02:17, 14 December 2005 (UTC)

The article doesn't explain the origins or timeframe of the architecture, hence the confusion about whether it came before or after the von Neumann architecture. I can't add this myself since this is the information I'm looking for! 82.18.196.197 11:12, 14 January 2007 (UTC)

In the introductory paragraph it states the origin:
The term originated from the Harvard Mark I relay-based computer, which stored instructions on punched tape (24 bits wide) and data in electro-mechanical counters (23 digits wide).
Click on the link for the timeframe in which that computer was made. -- RTC 04:03, 15 January 2007 (UTC)

Article Quality and Expansions

I think this is a good, well-written article. I do think, however, it could be a little better laid out. Some of the comments on here are pretty valid and they should perhaps be incorporated into the article. --Gantlord 13:25, 15 September 2005 (UTC)

Thank you for your suggestion! When you feel an article needs improvement, please feel free to make whatever changes you feel are needed. Wikipedia is a wiki, so anyone can edit almost any article by simply following the Edit this page link at the top. You don't even need to log in! (Although there are some reasons why you might like to…) The Wikipedia community encourages you to be bold. Don't worry too much about making honest mistakes—they're likely to be found and corrected quickly. If you're not sure how editing works, check out how to edit a page, or use the sandbox to try out your editing skills. New contributors are always welcome.

Relation between architecture and instruction set

Is there a relation between the instruction set (RISC vs. CISC) and the architecture of the processor/microcontroller? I find Harvard architecture processors with RISC instruction sets and von Neumann architecture processors with CISC instruction sets (I'm not sure if this is always true). Is there a reason relating the architecture to the instruction set?

No, there is no relationship (e.g., RISC = Harvard / CISC = von Neumann). You can build either of them either way. The PowerPC, SPARC, and ARM are RISC and von Neumann. Seymour Cray's machines were RISC (although the term had not yet been invented when he designed most of them) and von Neumann. I don't believe the Harvard Mark I was RISC as it had some rather complex instructions (e.g., interpolate value from function tape) and it was clearly Harvard.
The issue gets complex with modern machines that use caches. Both the Pentium and PowerPC are considered von Neumann from the point of view of the programmer, but internally they run as Harvard, using separate instruction and data caches. The Pentium is CISC and the PowerPC is RISC. To make it even more complex, current Pentium implementations "translate" the CISC instructions from the instruction cache into RISC-like microinstructions before actually executing them. -- RTC 04:25, 5 August 2006 (UTC)
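The CISC-to-microinstruction translation mentioned above can be sketched roughly. This is only an illustrative toy of mine (the instruction names and micro-op format are invented; real x86 decoders are vastly more involved): a single complex memory-to-register instruction is expanded into simpler RISC-like steps before execution.

```python
# Hedged sketch: expand one complex (CISC-style) instruction into a
# short sequence of simple (RISC-like) micro-operations.

def decode(cisc_instr):
    """Return the list of micro-ops for one instruction."""
    op, dst, src = cisc_instr
    if op == "ADD_MEM":              # register += memory: one CISC instr...
        return [                     # ...becomes two RISC-like micro-ops
            ("LOAD", "tmp", src),    # load the memory operand
            ("ADD", dst, "tmp"),     # then a register-to-register add
        ]
    return [cisc_instr]              # simple instructions pass through

micro_ops = decode(("ADD_MEM", "eax", 0x1000))
assert micro_ops == [("LOAD", "tmp", 0x1000), ("ADD", "eax", "tmp")]
```

The point is that the split happens behind the programmer-visible instruction set: the programmer still sees one CISC instruction, while the execution core only ever sees the simple micro-ops.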