Discrete computer.

Status
Not open for further replies.

()blivion

Active Member
Hello my glorious electronic allies... :)

I have had the thought in my head for a long time now about the logistics of designing and building a computer by hand from discrete parts, with an emphasis on speed.

A few things that are good to think about when entertaining the idea...

What is a good target speed? (Think world records.)
What is the theoretical upper limit for discrete computers?
What architecture would be best? Harvard, or Von Neumann? How many bits?
What are the most important physical constraints?
Does power usage matter? Does cost matter?
Are we talking fastest clock speed... or data throughput?
Where should one draw the line for being "discretely built"?
What would be the best ways to simulate it?
What kind of interfaces would be cool to have attached to it?
What kind of transistors? What logic family do we want to model?

I'm probably way off the mark, just pondering here... but I was thinking the target speed should be at least 1 GHz, which should be (improbably but) doable using some of the multi-GHz RF transistors.

For architecture, I like the idea of a differential asynchronous ECL RISC design with a unified SRAM memory (Von Neumann) and a low bit-width architecture, probably 4-8 bits. This should use the lowest number of gates and reach the highest obtainable speeds.

I don't think power usage should be a major consideration, I personally wouldn't mind making a beastly power bill destroyer, so long as it was also wicked fast. It could use 1000+ watts and still cost nothing to operate if it is only used sparingly. Also, using power to defeat RC propagation delay will reduce slowdowns that are typical of non-integrated designs. That amount of power is easily providable with multiple power supplies, so it's not like it will be hard to do either.

I'm not sure about cost. On the one hand, I can't spare $20 to buy a few parts for a project I have been meaning to do for a year or so now. On the other hand, if it looks like a serious enough effort that it could break world records, people might become interested enough to pitch in $100-$500 at a time... then $10,000 doesn't seem too far-fetched a budget, though that's pretty much counting the eggs before they hatch. Ideally, limitless budget. I'm more interested in connecting the technological dots.

For construction, I was thinking that with my ridiculous idea of a target speed, we couldn't help but use high-end surface-mount microwave board, something like alumina, and microstrip design techniques (sat-TV LNB PCBs, for example). The different circuit blocks would obviously need to be radio shielded (...about every mm), and it would need a solid ground plane and lots of decoupling. It pretty much has to be a double-sided board, with one side the component side and the other a solid ground plane. If it were possible to do a solid power plane, maybe that could be coated with dielectric, then mated to a copper or steel sheet as the ground plane, also doubling as the heatsink. Vias and pins could then be brought from the solderable ground plate through the power plane to the component side. But this could easily be the other way around too, if that had some advantages. I'm pretty sure microstrip needs a very precise ground-plane depth to work right.
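To see why the ground-plane depth is so critical, here is a quick sketch using the standard IPC-2141 microstrip approximation. The trace dimensions and dielectric constant below are just illustrative assumptions, not values for any particular board:

```python
import math

# IPC-2141 approximation for microstrip characteristic impedance.
# Z0 depends directly on h, the dielectric height above the ground plane,
# which is why that spacing has to be held precisely.
def microstrip_z0(w, h, t, er):
    """w = trace width, h = height above ground plane, t = trace thickness
    (all in the same units), er = relative permittivity of the dielectric."""
    return 87.0 / math.sqrt(er + 1.41) * math.log(5.98 * h / (0.8 * w + t))

# Illustrative FR-4-like stackup (assumed values): 0.25 mm trace, 1 oz copper.
for h in (0.2, 0.4):  # mm
    z = microstrip_z0(w=0.25, h=h, t=0.035, er=4.3)
    print(f"h = {h} mm -> Z0 ~ {z:.0f} ohms")
```

Doubling the dielectric height shifts the impedance by tens of ohms, which would wreck the termination matching discussed later in the thread.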

One of the things I'm hung up on a bit is the gates themselves. There is supposed to be a way to turn the first, most basic OR ECL logic gate into all the other logic gates; I just don't know how. That is something I will probably need to be tutored on. The internet is surprisingly devoid of info on that particular subject. Or maybe I just don't know the right search-engine string to use. Once that is worked out, though, I would make a bunch of logic-gate blocks that could be tiled together. A "cell" or "grid" pattern would be really easy to print and to test, and it would be easier to screen if everything fits a grid (mass-production foil press). I'm not quite sure how they could be designed to interconnect. A hex pattern would be cool. The only problem is you CAN'T have unterminated wires in microstrip designs. Hell, even a stub located near an active trace can cause serious problems. In any case, I think a copy/paste-friendly grid, if at all possible, would be the best way to go.
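On the "one gate builds them all" question: the basic ECL gate provides complementary OR/NOR outputs, and NOR by itself is functionally complete. A quick Python truth-table check (gate names are just illustrative helpers, not any particular library) shows the constructions:

```python
# NOR is functionally complete: every other gate can be derived from it.
# This is why a single basic ECL OR/NOR cell is enough to build a whole CPU.
def NOR(a, b):
    return int(not (a or b))

def NOT(a):    return NOR(a, a)                     # tie both inputs together
def OR(a, b):  return NOT(NOR(a, b))                # invert the NOR
def AND(a, b): return NOR(NOT(a), NOT(b))           # De Morgan's law
def XOR(a, b): return AND(OR(a, b), NOT(AND(a, b)))

# Exhaustive truth-table check against Python's own bitwise operators.
for a in (0, 1):
    for b in (0, 1):
        assert OR(a, b)  == (a | b)
        assert AND(a, b) == (a & b)
        assert XOR(a, b) == (a ^ b)
print("all gates verified from NOR alone")
```

In hardware the inversions come for free from the ECL gate's complementary outputs, so the gate count is lower than this literal translation suggests.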


Anyway, as a whole, it's a curious little project that keeps popping into my mind, so I figured I would get it out in the open. I'm sure my ideas are flawed; I fully expect harsh criticism and naysaying. It's not a very "mainstream" kind of project, nor are my ideas. But I'm not really worried about either, so go nuts.
 
Why and what for?
 
For the fun of it?
 
I think the problem becomes wire length. The old Crays had that problem.
 
Agreed, ronv. From what I know, propagation delay is the biggest limiting factor for macro-scale computer systems. And it's even becoming a serious problem for integrated circuits.

Logic interconnect length creates two things: resistance and capacitance. Together, these form an RC time constant which, of course, delays our signal. The ways to reduce this problem are to use thicker, wider, shorter wiring to reduce resistance, and/or to reduce trace proximity to metal objects. Unfortunately, that also means keeping away from the ground plane, which ruins the idea of a microstrip design.
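The RC argument can be put in rough numbers. The driver resistance and trace capacitance below are assumed illustrative values, not measurements:

```python
import math

# Back-of-envelope RC delay for one logic interconnect.
R = 50        # ohms -- driver output / termination resistance (assumed)
C = 2e-12     # farads -- a few cm of trace at roughly 1 pF/cm (assumed)
tau = R * C   # RC time constant of the node

# Time for the node to rise to 50% of its final voltage: tau * ln(2).
t50 = tau * math.log(2)
print(f"tau = {tau*1e12:.0f} ps, 50% rise in {t50*1e12:.0f} ps")
```

That ~70 ps rise time eats a sizeable fraction of a 1 GHz cycle (1000 ps), and every gate-to-gate hop pays it again, which is why interconnect RC dominates a macro-scale build.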

Another idea, which is perfectly combinable with the above, is to use a low-voltage differential design. If all logic signals are differential, they are more immune to noise, and thus the voltages that are considered valid logic levels can be much closer together. This means "the capacitor" won't have to charge as much to reach a valid logic state, so propagation delay is reduced. This is one of the useful properties of low-voltage differential signaling that has made it the high-speed interconnect of choice.
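The smaller-swing advantage is easy to quantify for an RC node. Assuming an exponential charge toward the drive voltage, the time to traverse a swing dV is tau·ln(V/(V−dV)). The RC constant and voltages here are illustrative assumptions:

```python
import math

# Time for an RC node, driven exponentially toward V_drive, to cross a
# logic swing of dV volts. Smaller swings cross the threshold sooner.
tau = 100e-12     # assumed RC time constant of the interconnect (100 ps)
V_drive = 3.3     # volts available to charge the line (assumed)

def swing_time(dV, V=V_drive, tau=tau):
    return tau * math.log(V / (V - dV))

t_big   = swing_time(1.65)   # rail-to-rail-style threshold at V/2
t_small = swing_time(0.35)   # ~350 mV LVDS/ECL-class swing
print(f"large swing: {t_big*1e12:.0f} ps, small swing: {t_small*1e12:.0f} ps")
```

With these numbers the small-swing signal settles roughly six times faster for the same RC, which is the whole point of ECL/LVDS-style levels.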

Making it an asynchronous (no clock) design also helps deal with propagation delay in an odd way. In a clocked logic system, each logical unit pipelines data to the next on each clock tick. With asynchronous logic, a signal applied to an input simply cascades through all the logic gates until it reaches some "checkpoint" destination. Instead of clocks, it is gated with logic "interlocks" and/or has very carefully sculpted input timings to prevent errors. The advantage is that you don't need to distribute a clock to every single part, and the system will run "as fast as it possibly can", in theory. The disadvantage is that asynchronous logic is chaotic and complex, almost impossibly so. Race conditions and such can ruin your circuit. The thing is, most of the problems of asynchronous logic have to do with propagation delay and the arrival times of various inputs. But with the proposed board design, propagation delays will have to be fine-tuned and well understood anyway, by the nature of the microstrip technique. So if that work has to be done in the first place, why not make use of interesting logic that works well with it, and get more advantages?

It sounds contradictory, but circular reasoning is perfectly valid in this particular case. Use an async design, because we already have to worry about propagation delay with microstrip designs, and async designs have issues revolving around propagation delay too. At least that's my theory.
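The classic async "interlock" primitive is the Muller C-element: its output changes only when all inputs agree, and holds its previous value otherwise. A minimal behavioral sketch (not a circuit-level model):

```python
# Behavioral model of a 2-input Muller C-element, the standard building
# block for asynchronous handshakes: the output follows the inputs only
# when they agree, and holds state while they disagree.
class CElement:
    def __init__(self):
        self.state = 0

    def update(self, a, b):
        if a == b:              # inputs agree -> output follows them
            self.state = a
        return self.state       # inputs disagree -> hold previous value

c = CElement()
print(c.update(1, 0))  # disagree: holds initial 0
print(c.update(1, 1))  # both high: output goes to 1
print(c.update(0, 1))  # disagree: still 1
print(c.update(0, 0))  # both low: back to 0
```

Chained C-elements are how real async pipelines say "now" locally, stage by stage, instead of with a global clock.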
 
Use an FPGA. High speed. Millions of gates without a soldering iron. Fast SRAM. The delay is only one inch.
 
nsaspook said:
You could take a simple version of the SAP-1 machine from Malvino's book, Digital Computer Electronics, and convert the TTL design to ECL to get an idea of the complexity of ECL design rules. It sounds like a cool thing to do if you don't value your sanity.

https://www.google.com/search?q=on+s...w=1680&bih=840

Cool reference. I have a few whitepapers like that too. All of them seem to talk about termination, and none get into actual ECL gate design. Anyway, yeah, as far as I can tell, the only real hurdle with ECL/PECL vs. other techniques is the line termination. It's more the realm of RF impedance matching than digital logic. But when you are pushing 1 GHz, you need to deal with termination anyway, regardless of the family you are using. Bare PCB will get reflections at greater than 1 GHz.

Here is another small whitepaper that's kinda interesting. It shows a schematic of an ECL frequency divider. The transition frequency for the transistors in it is 200 GHz, and the device works up to 100 GHz. The best BJT RF transistors I can find are ~80 GHz. So, PCB and macroscopic considerations aside (their project is an IC die), 40 GHz is about what is possible for a single logic gate, based solely on fT.
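The 40 GHz figure follows from simple scaling of the whitepaper's numbers, treating the paper's result as an empirical fT-to-toggle-rate derating:

```python
# The cited whitepaper's divider: built on a 200 GHz-fT process, working
# to 100 GHz, i.e. a gate toggles at roughly fT / 2 on that process.
f_T_paper   = 200e9   # Hz, transition frequency in the paper
f_max_paper = 100e9   # Hz, demonstrated operating frequency
ratio = f_T_paper / f_max_paper   # ~2x derating from fT to gate speed

# Apply the same derating to the best discrete RF BJTs available (~80 GHz fT).
f_T_available = 80e9
f_gate_est = f_T_available / ratio
print(f"estimated max gate toggle rate: {f_gate_est/1e9:.0f} GHz")
```

This is an optimistic bound: it ignores package parasitics and board-level RC, which hit a discrete build far harder than an IC die.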

misterT said:
Interesting hobby project. If you are really going for it, I suggest you get a copy of this book: High Speed Digital Design: A Handbook of Black Magic
EDIT: There is a "sequel" to that book also: High Speed Signal Propagation: Advanced Black Magic

I have only read the first one.. good stuff.

I'm glad you think such a project is interesting too. When my brother first discovered relays, he was saying he wanted to build a computer out of them. He was a little upset when I informed him that the Russians had beat him to it quite a few years ago. :)

Those books look great. Though I would love to get them, I usually don't buy research materials. I have the internet, which should have all the same information just a search engine query away. Plus... I have no money :(

tcmtech said:
Why and what for?

ronsimpson said:
Use an FPGA. High speed. Millions of gates without a soldering iron. Fast SRAM. The delay is only one inch.

The point of the project would not be to do what is most efficient or easiest, but rather to do it simply for the sport. The whole point is to solder the individual transistors, resistors, and capacitors together into a working, discretely built computer. It's a hobby project; it doesn't have to solve any particular problem or be useful. Though, admittedly, I too am more of an "it needs to do something" kind of person. Still, I do like the idea of building a computer by hand out of the simplest parts. Oddly satisfying. If I could easily make the parts myself, I probably would do that too.
 

OK, no clock. So, who is going to say "now" to all players every time it is needed? I cannot grasp that.
 
OK, no clock. So, who is going to say "now" to all players every time it is needed? I cannot grasp that.

In a nutshell, it works like nerve impulses. If a signal is applied to the inputs, it goes from one "player" to the next, to the next, to the next... and so on. There is usually some kind of interlock that ensures "functional blocks" can't receive conflicting inputs, or more inputs while they're already working on something. Another way of putting it is "self-clocking". You can read more basic information in the wiki article on asynchronous circuits.

If you're a bit skeptical, do note that many async CPUs have already been built. It's not something new.
 
So basically you want a super fast computer that doesn't really do anything useful and you want others to help design and pay for it? :confused:

Sorry, but Bill Gates beat you to it a few times over. :p
 
()b:

Back in college, before PCs and around the 4004/8008 era, two lab partners and I did just that. We made a computer that had a 16-bit instruction set and 16 words of memory. It was essentially a microcoded instruction set. It could sort 16 numbers in ascending or descending order depending on the microcode. One guy did the data structure, another the memory, and I did the program counter/branching (the harder part). I learned why the program counter points to the "NEXT INSTRUCTION" the hard way.

It was a lot of fun.
 
Your description in #11 almost sounds like LabView which uses Data-Flow programming. Only when all of the inputs are present can that module execute.

Cool, because parallel processes are easy. Not cool, because software can cause race conditions. Error handling is different because of the lack of "exceptions".
 
So basically you want a super fast computer that doesn't really do anything useful and you want others to help design and pay for it?

Sorry but Bill Gates beat you to it a few times over.

Don't really know how to respond to this... I hope you're just joking around, LOL. If not, all I can say is that it's not about getting something for nothing; it's "the doing" that is entertaining. If such a project were actually happening, I would only expect people who wanted to help for the sport of it to get involved. And of course, I would be just as involved in designing it as the next guy. I wouldn't expect other people to do the work just for my personal gain. In point of fact, I would be perfectly content if someone else were to read this thread, decide to build it themselves, and have ME help THEM with the design and ideas, for their gain.

Finally, keep in mind that although we are talking about a >1 GHz computer, that figure would be superficial. The architecture would be so narrow that the actual program throughput would not be even remotely comparable to a modern desktop PC. So it would not be a "fast" personal computer by any standard, but quite fast for being built by hand from individual parts.


@KISS

Yeah, the idea of rolling your own computer is really neat IMO, something I eventually want to try out (hence this thread). I have only seen a few serious projects of the sort, and most of them were disappointing because they rely on standard CMOS/TTL chips. **broken link removed** that was built as a hobby project, and I don't even think it's complete yet. He is using hand-built DTL.

I looked at LabView, and you're spot on; that is almost exactly how I envision async logic working. Something like flip-flops at all the inputs works as the interlock in this case. In the idle state, an input is neither logic 0 nor logic 1. When an input is fed a signal, it takes one of the two states. When all of the inputs are at a valid logic state, the logic that owns those inputs fires, processes the input, then passes the result to its outputs for further logic blocks to process.
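That firing rule is easy to sketch as dataflow-style code, with `None` standing in for the idle "neither 0 nor 1" state (the helper names here are just illustrative, not from LabView or any library):

```python
# Dataflow-style firing rule: a node fires only when every input holds a
# valid token. An idle input is None -- neither logic 0 nor logic 1.
def try_fire(gate, inputs):
    if any(v is None for v in inputs):
        return None           # not all inputs valid yet: block and wait
    return gate(*inputs)      # all inputs valid: fire and emit the result

AND = lambda a, b: a & b

print(try_fire(AND, [1, None]))  # one input still idle -> None (no firing)
print(try_fire(AND, [1, 1]))     # both inputs valid -> fires, prints 1
```

The result then becomes a token on the next block's input, which is exactly the "only when all inputs are present can the module execute" behavior described for LabView above.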

One of the really neat things about async logic is that it will, by nature, run as fast as it actually can. Do something to change the propagation delay, like change the temperature, and you suddenly lose or gain program throughput. I really like that; it's totally alien to what we are used to with normal desktop PCs.

I'm not gonna lie though, async logic is almost certainly a PITA to design with. So it all comes down to: will it be worth it? For a simple, highly linear computer system, I think so.
 
It's only hard to design with when all you have is a hammer (classical procedural programming) and see everything as a nail (execute with exceptions).

The last LabVIEW I got to work with was a lot better with error handling, but it should have been built in from the start. "Race conditions" are not expected with software. The real beauty of the language is how easy it is to write parallel processes without thinking.
 
See if you can get the hardware manuals for a DEC PDP-10 (or was it the PDP-8?). IIRC that was a discrete-transistor minicomputer, and DEC included the hardware manuals along with the hardware, so the owner didn't need to keep a bunch of DEC techs on staff (unlike IBM, where IBM employees were the only ones allowed to repair or maintain the hardware).
 
That had to be the PDP-8, which I remember well. I also saw hardware manuals for some of the PDP-11s. The PDP-10 is probably better known as the DEC-10. The PDP-11 was definitely TTL. The family lineage/genealogy is quite unique.
 
Looks like even the PDP-8 uses TTL chips, though that doesn't matter, as the gate-level schematics can still be inferred from such information. I found the schematic for the PDP-8 systems **broken link removed**. Can't find any hardware manuals on the PDP-10/DEC-10 (though while I was searching I did happen to find a site that is vulnerable to trivial SQL injection... not really my business though, so I don't really care. :/)

I don't know what would be good as far as higher-level arch, I'm still wondering about the lower level stuff as of now. For high level, it should be something simple, with a small collection of only the most useful/necessary instructions.


Moving on...


As I see it, the main question that needs to be answered for such a project to move forward is...
"What is the minimum propagation delay one can get with a discretely built logic gate of any construct?"

This question breaks down into...
"What is the lowest capacitance and resistance one can get in a discrete logic gate's critical areas?"

If the answers are in the range of femtofarads and tens of ohms, then GHz gates are possible by my estimation. Though I could be mistaken.
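To put that estimation in numbers, here is the back-of-envelope version. The component values and the "five time constants per transition" settling rule are assumptions for illustration, not measurements:

```python
# Back-of-envelope: if a gate's critical node sees tens of ohms and
# femtofarad-range capacitance, what switching rate does that allow?
R = 50        # ohms (assumed)
C = 100e-15   # 100 fF (assumed; discrete parts will likely be worse than on-die)
tau = R * C   # RC time constant of the critical node

# Allow ~5 time constants for a clean transition, two transitions per cycle.
f_max = 1 / (2 * 5 * tau)
print(f"tau = {tau*1e12:.0f} ps -> roughly {f_max/1e9:.0f} GHz max")
```

Even with the generous settling margin, tens-of-ohms and sub-picofarad numbers land comfortably above 1 GHz, which is consistent with the fT-based estimate earlier in the thread.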
 