Discrete computer.

Microcoding makes life a bit easier. Microcode has very wide word widths, like 256 bits. The PDP-11/45, /50, and /55 were microcoded machines.

So, when an instruction gets decoded, it is then broken down into a set of microcoded instructions. Microcode is probably hard to explain, but I can try. Heck, I can probably find my lab notebook and scan the pages if you're really interested.
 
I understand the concept of microcode well enough, I think. It's how each instruction is carried out at the gate/electronics level. Or, put another way: if programs are "books", subroutines are "pages", lines of code are "sentences", and single instructions are "the letters", then microcode would roughly be "the font/typeface"... so to speak.

Which brings me to another point...

A lot of people mistakenly think that if a CPU "runs at 3 GHz" clock speed, that means this is how fast a single instruction is executed. For highly pipelined systems that happens to be roughly true, but in reality the instruction rate is determined differently. The core clock speed determines when each stage of the CPU does some part of an instruction, such as load, store, decode, and so forth. A single instruction is actually composed of several of these smaller steps. And yes, you can follow this recursive rabbit hole right down to the transistor and voltage level.

Clock rate vs. instruction rate is the main reason that AMD had to start using the PR system again for their ~2001 CPUs (Google "MHz Myth"). These chips did run at slower clocks, yes, but larger parts of each instruction were getting finished per clock, meaning one instruction took fewer clock cycles to finish. So a similar number of instructions per second were still being executed compared to their higher-clocked competitors. Anyway, I digress.
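To put some quick numbers on that (the clock rates and instructions-per-clock figures below are made up for illustration, not the specs of any real CPU), here's the arithmetic in Python:

# Illustrative only: hypothetical clock rates and average instructions-per-clock (IPC).
cpu_a = {"name": "higher-clocked CPU", "clock_hz": 2.0e9, "ipc": 0.9}
cpu_b = {"name": "lower-clocked CPU",  "clock_hz": 1.5e9, "ipc": 1.2}

for cpu in (cpu_a, cpu_b):
    # Instruction throughput = clock rate x average instructions finished per clock.
    rate = cpu["clock_hz"] * cpu["ipc"]
    print(f"{cpu['name']}: {rate / 1e9:.2f} billion instructions per second")

Despite the 500 MHz clock difference, both hypothetical parts work out to 1.80 billion instructions per second.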

I'm not worried about the font/typeface just yet, I'm still on about which ink would be best. :)
 
Remember RISC and CISC architectures.

Microcode: Not really. Take the 1802 processor, which had what I would call a "pass the program counter" operation. A microcoded instruction might be:
1. Get the program counter
2. put it into register X
3. Return

That might be encoded into 5 bits of the microcode and three locations. The general idea is that those 5 bits always do the same thing. So, the microcode twiddles the data structures directly.

So, in our sorter, say we had two registers.
One bit may have said load into register A.
Another bit may have said load into register B.
Another bit may have done a compare, A > B.
Another bit may have been swap A and B.
Another may have been branch if A > B.
Another, branch if A < B.
Another might have been halt.

Every bit in a microcoded line executes at once.

So, consider the computer as a bunch of data structures and the microcode as the switches to operate each data structure.
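Here's a rough sketch of that idea in Python. The bit assignments and the two-register sorter program are invented to match the description above, not taken from any real machine's microcode:

# Hypothetical 7-bit "horizontal" microcode word for the two-register sorter.
# Each bit is a control line; all set bits in a word act in the same step.
LOAD_A, LOAD_B, CMP_AGTB, SWAP_AB, BR_GT, BR_LT, HALT = (1 << n for n in range(7))

def run(microcode, data):
    a = b = 0
    flag_gt = False
    pc = 0
    it = iter(data)
    while pc < len(microcode):
        word, target = microcode[pc]
        if word & LOAD_A:   a = next(it)
        if word & LOAD_B:   b = next(it)
        if word & CMP_AGTB: flag_gt = a > b
        if word & SWAP_AB:  a, b = b, a
        if word & HALT:     return a, b
        # Branch bits steer the next microcode address; otherwise fall through.
        if (word & BR_GT) and flag_gt:
            pc = target
        elif (word & BR_LT) and not flag_gt:
            pc = target
        else:
            pc += 1
    return a, b

# Load two values, compare, swap if out of order, halt: sorts the pair.
program = [
    (LOAD_A | LOAD_B, None),   # 0: load both registers
    (CMP_AGTB,        None),   # 1: set the A>B flag
    (BR_LT,           4),      # 2: already in order -> skip the swap
    (SWAP_AB,         None),   # 3: swap A and B
    (HALT,            None),   # 4: done
]
print(run(program, [7, 3]))   # -> (3, 7)
print(run(program, [2, 9]))   # -> (2, 9)

Each microcode word is just a row of switches thrown at once, which is the "twiddling the data structures directly" part.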
 
so, a collection of bitwise instructions executed in parallel
 
Sorry I went silent all of a sudden; lost power. Dang wind storm took out the lines for a while.

Microcode discussion

I don't really know myself; I still figure my definition is pretty close, and the first three pages of Google results for "microcode meaning" seem to agree with me. But the exact definition of microcode aside, one needs to figure out the logic gates first before any such thing could even be implemented.

minicomputers discussion

You guys have the right idea here, hit the nail right on the head actually. This was the vision I was thinking would be the end result for such a project, only far faster/better.

In fact, it would be cool if such an attempt were simply done directly in one of these old minicomputer cases, gutted of its original boards of course. Those old systems have pretty much the perfect layout for doing exactly what is proposed in this thread. I would even bet that the PSUs of such an old system would be perfect, while not using a single IC. So yeah, A+ for this idea, I'm all for it.



Anyway, just to rehash my general overview of such a system, and how it would come together...

Step 1: Decide on a technology for the logic gates.
I like the idea of high-end RF heterojunction bipolar transistors in an ECL configuration for each gate's inner workings. In theory, a greater than 1 GHz core clock is easily possible this way because ECL can readily obtain sub-nanosecond propagation delay, though the jury is still out on whether this is actually doable on a macroscopic scale. This aspect really needs the attention of a microwave engineer to define its plausibility, then some experimenting, before such a project could continue.
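As a rough sanity check on that timing budget (the delays below are assumed round numbers, not measured figures for any particular HBT/ECL design):

# Rough timing-budget check with assumed numbers, not measured values.
clock_hz = 1.0e9                  # target core clock
period_s = 1.0 / clock_hz         # 1 ns per cycle
gate_delay_s = 0.25e-9            # assumed ECL gate propagation delay
interconnect_s = 0.3e-9           # assumed wiring/settling overhead per cycle

levels = int((period_s - interconnect_s) / gate_delay_s)
print(f"Usable logic levels per cycle: {levels}")   # -> 2 at these numbers

Only a couple of gate levels per clock would fit, which is why the gate delay and the wiring overhead both matter so much.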

Step 2: Design logic gate and basic PCB artwork.
After step 1 is figured out, each of the different logic gates needs basic PCB artwork made, artwork which works well at the target speed. The artwork needs to be designed with microstrip-like theory and techniques, as reflections and proper termination are going to be of the utmost importance at these speeds. Also, much care needs to be given to reducing capacitance and resistance, to improve propagation delay as much as possible.
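As an example of the kind of number-crunching this step involves, here's the common IPC-2141 microstrip approximation in Python; the trace dimensions below are placeholders, not a worked-out stackup:

import math

def microstrip_z0(w_mm, h_mm, t_mm, er):
    # Characteristic impedance (ohms) via the IPC-2141 approximation.
    # Reasonable only for roughly 0.1 < w/h < 2 and er < 15.
    return (87.0 / math.sqrt(er + 1.41)) * math.log(5.98 * h_mm / (0.8 * w_mm + t_mm))

# Placeholder geometry: 0.3 mm trace, 0.2 mm above the plane, 35 um copper, FR-4-ish er.
print(f"{microstrip_z0(0.3, 0.2, 0.035, 4.5):.1f} ohms")   # ~50 ohms

Every gate-to-gate run would need its impedance held near a known value like this so it can be terminated cleanly.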

Step 3: Design advanced logic gate architecture.
After the internals and specifics of each gate are taken care of, each gate as a whole should be designed to fit into its neighbors in a grid-like pattern, such that one can just copy/paste a gate's artwork to any location on the grid to ultimately create combinational logic functions. Effectively, we would want to be able to treat each gate as a single building block, rather than fussing over it as a collection of parts each time. One would need to know what gate inputs, outputs, and power are going to look like before this stage could be worked on. This could arguably be VERY difficult.

Step 4: Design each major computational unit.
Each computational unit would need to be divided onto a single PCB, such that one board does one function for the whole computer. For example, the ALU would probably be one board, and the instruction decode logic on another. Each unit would have connections on its edges: some fine-gauge shielded connectors for passing data to and from the rest of the system, and heavy-gauge wire/connections for power. These would be implemented as "cards", where each card could be built, tested, inserted, removed, and all around dealt with separately from the rest of the system. Modularity makes for easy design, diagnosis, and repair.

Step 5: Design the computers architecture.
This is where stuff like microcode, RISC/CISC, Harvard/von Neumann, bus width and so forth would be discussed. Ideally, the simplest structure that reuses the most functional units for other purposes and requires the least interconnecting would be the most practical design. However, a lot of performance and usability will be determined here. If it were intended to implement/emulate a particular language, or if we want an FPU or not, this would be the best place to optimize these things. This is arguably where the most fun of the whole project is.

The rest is physical construction, cable routing, cooling, power, and so on and so forth.


Conclusion.
Anyway, I'm not really seeing the kind of interest in the project that would be needed for it to really take off and be serious, so sadly, the idea is probably dead. This is fine though; it's not as if I was gung-ho cowboy enthusiastic about doing this, it's just something that has been on my mind for a while. I was wondering if there was anyone else out there with similar thoughts and ideas, so I decided I would come out with what I had just to see what happened. So here are some notes to take away from this.

1: LNBs can readily process frequencies > 1 GHz using discrete transistors and no ICs. This should be just as applicable to discrete-built logic gates, but this needs to be confirmed by an expert.

2: Termination and avoiding reflections are almost certainly the top-priority design problems for such a system. This, and the above, mean a microwave circuit engineer would probably be the VIP for such a project.

3: There exists a lot of literature on exactly how to implement ECL gates and how they work, but not a whole lot on how to realize each different gate type with ECL. (OR/NOR can supposedly be used to make any gate type; a quick truth-table check of that claim follows these notes.)

4: Old minicomputer cabinets and hardware are probably the ideal project enclosure for a hand-built discrete computer. If you can get your hands on one, you would have a great start.
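Here's the quick truth-table check of the OR/NOR claim from note 3, treating NOR as the only primitive:

from itertools import product

# NOR as the only primitive; every other gate below is built from it.
def NOR(a, b): return int(not (a or b))

def NOT(a):     return NOR(a, a)
def OR(a, b):   return NOT(NOR(a, b))
def AND(a, b):  return NOR(NOT(a), NOT(b))
def XOR(a, b):  return AND(OR(a, b), NOT(AND(a, b)))

# Verify against Python's own operators over all input combinations.
for a, b in product((0, 1), repeat=2):
    assert OR(a, b)  == (a | b)
    assert AND(a, b) == (a & b)
    assert XOR(a, b) == (a ^ b)
print("NOR alone reproduces NOT, OR, AND, and XOR")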
 
I need to point out this:

http://en.wikipedia.org/wiki/Grace_Hopper said:
She visited a large fraction of Digital's engineering facilities, where she generally received a standing ovation at the conclusion of her remarks. During many of her lectures, she illustrated a nanosecond using salvaged obsolete Bell System 25 pair telephone cable, cut it to 11.8 inch (30 cm) lengths, the distance that light travels in one nanosecond, and handed out the individual wires to her listeners. Although no longer a serving officer, she always wore her Navy full dress uniform to these lectures.
 
Ah Grace Hopper, such an amazing person.

Yeah, with that quote you highlight a most interesting point. Excluding quantum entanglement, the maximum possible speed that information can travel is the speed of light, and a real circuit is probably going to be quite a bit slower than that. So to make a discrete computer that could run at >1 GHz, it would under the very best conditions have to be no more than 30 cm cubed, and probably much less. That or magic...
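Putting numbers on it (the ~0.6c figure for signal velocity on a PCB is a typical ballpark assumption, not a measured value):

c = 299_792_458            # speed of light in vacuum, m/s
clock_hz = 1.0e9
period_s = 1.0 / clock_hz

vacuum_cm = c * period_s * 100
pcb_cm = 0.6 * c * period_s * 100   # assumed ~0.6c signal velocity on a board
print(f"Per 1 ns cycle: {vacuum_cm:.1f} cm in vacuum, about {pcb_cm:.0f} cm on a board")

So the "one Grace Hopper nanosecond" of wire shrinks to something like 18 cm of actual trace per clock.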

But then the concept of pipelining, plus the idea of data throughput vs. clock rate, comes into play, which brings up the question of what "speed" really is.

The gates themselves (a single gate, that is) will be relatively small, maybe 3 cm square at most. With that in mind, the speed of light will not impede, so nanosecond gates are not relativistically impossible. But a gate is not "the computer", so its switching speed is not necessarily the computer's speed. In order for gate speed to actually be the computer's speed, the question of "Can we pipeline to that level?" needs to be answered. If we can figure that out, the system really could run at a throughput of >1 GHz, in bucket-brigade fashion, even if the data travels a length that is longer than 30 cm. That's why I was looking at async logic. In theory, you can reuse a logic gate right after the data has left its output, but it's not so easy if it's clocked.
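The bucket-brigade arithmetic looks something like this, with assumed round numbers for the stage count and per-stage delay:

# Pipeline throughput vs. latency with assumed numbers: 12 stages,
# each with 0.8 ns of gate plus wiring delay.
stages = 12
stage_delay_s = 0.8e-9

latency_s = stages * stage_delay_s          # time for ONE result to emerge
throughput_hz = 1.0 / stage_delay_s         # results per second once the pipe is full

print(f"Latency per instruction: {latency_s * 1e9:.1f} ns")      # 9.6 ns
print(f"Throughput: {throughput_hz / 1e9:.2f} GHz equivalent")   # 1.25 GHz

The total path can be long and slow end to end, yet results still pop out faster than once per nanosecond, as long as the pipe stays full.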

So... I was thinking of the system more in terms of "impulses" traveling through "switch tracks" on their way from one memory at the input to another at the output, or something along those lines. In this way, the absolute speed limit set by distance and the speed of light would be rendered less meaningful to the system performance. Both the program throughput and the "switching speed" would still be >1 GHz, so long as the gates themselves could do it and signals were fed in quickly, right behind each other.

However, this *IS* probably highly convenient thinking.

The idea that you can simply shove data into some gate's inputs at a rate faster than the propagation delay (or the speed of light) implies that something is doing things that fast already. But the system is a self-contained loop, or stream, so there shouldn't be some magic external super-fast data supply able to do this. Also, there is such a thing as data dependence. In a large pipeline, one can't always pump program/data in one end and expect it to always come out processed correctly on the other. Sometimes the next immediate calculation relies on the answer of the one in front of it, such as a conditional jump. And I don't think one would have the transistor budget to implement branch prediction for such an implied "simple" computer.

So yeah, I don't deny that there are a lot of unknown dangers lurking. The whole idea could well be impossible if things like "c" can't be worked around carefully.
 
3: There exists a lot of literature on exactly how to implement ECL gates and how they work, but not a whole lot on how to realize each different gate type with ECL. (OR/NOR can supposedly be used to make any gate type).

ECL data books and "cookbooks" from about 1965-1975 will have this information, as this was the period when ECL transitioned from discrete circuits to ICs, and even the IC manufacturers were including the internal circuits of many of the ICs in the data books. The real "proprietary" information in those days wasn't in the schematic, but in the silicon masks and layout.
 