Hardware is still coded with machine/assembly language for efficiency, speed, etc.


PG1995

Active Member
Hi

It is said that coding hardware in machine language can greatly improve speed, efficiency, and related parameters. I have been told that even now, when speed and efficiency are the main concerns, coding hardware in assembly or machine language is preferred. I don't get it. Suppose you write code in C++ for a microcontroller. You compile it, and the compiler renders the code into a hex file (machine code). You still end up with machine code. The same can be said of assembly language, which also gets assembled into machine code but is a low-level language. Why is that so? Could you please help me with this? Thank you.

Regards
PG
 
PG,

A good assembler programmer can code much better than a good higher level language programmer. If you had an expert knowledge of both types of languages and compared the two, you would see the difference. By better, I mean a smaller executable and faster execution. The big disadvantage with assembler is that you have to have an intimate knowledge of your CPU architecture. Another is that if you switch to another CPU, you have to translate the assembler instructions to the new CPU's architecture. But with assembler, you have complete control of the machine down to the operating system level.

Ratch
 
A good assembler programmer can code much better than a good higher level language programmer

That is true, but what's also true is that a GOOD higher level language programmer will code much better than a BAD assembler programmer.
 
That is true, but what's also true is that a GOOD higher level language programmer will code much better than a BAD assembler programmer.

Also what is true is that today's compilers can typically generate much better code than most assembler programmers.

Today it seems one should think of assembler programming as a case of "premature optimization" for most needs. There will always be cases where high quality crafted assembler code for a particular application is needed, but for most development, a higher level language (such as C/C++) will generally be preferred simply because it is more widely known and is generally portable to many different architectures. This means that for a product, code can continue to be reused and maintained far longer and more easily than a specific set of assembler code. For most applications, this is more important than having the "fastest running code".

When developing an application, one should first code the system with ease of maintenance in mind, along with getting something running that meets the specification of what is needed (even if it runs slowly). Once you have done this, you can apply testing and benchmarking procedures to analyze and isolate the "slow" sections (usually inner loops), and then apply optimization strategies to those areas to effect an overall speedup in the system as a whole. Usually, what is needed for these areas will be a better algorithm, or perhaps an entire rethink/recode of how the system as a whole (this includes the human "soft" processes) operates. Occasionally, though, you may need to insert a bit of custom assembler as a block of inlined code to effect the change needed. Such a code change should be thought of as a "last resort" and, if implemented, heavily documented and justified (as you will likely have broken the "portability" and "maintainability" of the system should another platform or development team be used).
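A minimal sketch of the measure-first approach described above, in C; the function, the workload, and the timing output are all invented for illustration:

```c
#include <stdio.h>
#include <time.h>

/* Hypothetical "slow" section isolated so it can be timed on its own. */
static long long sum_of_squares(long long n)
{
    long long total = 0;
    for (long long i = 0; i < n; i++)
        total += i * i;
    return total;
}

int main(void)
{
    clock_t start = clock();
    long long result = sum_of_squares(1000000LL);
    clock_t stop = clock();

    double ms = 1000.0 * (double)(stop - start) / CLOCKS_PER_SEC;
    printf("result=%lld  elapsed=%.2f ms\n", result, ms);

    /* Only if this section proves to be the bottleneck would a better
       algorithm - or, as a last resort, inline assembler - be justified. */
    return 0;
}
```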
 
I remember a software package that took two of us 8 months to write on a real-time OS using Fortran. There was one machine language instruction, MFPI (Move From Previous Instruction space) or MFDI (Data space), that had to be used for some reason.
 
Also what is true is that today's compilers can typically generate much better code than most assembler programmers.

Modern compilers rely on the computers being really fast - the code isn't generally that great.

If you disassemble PIC C code it's absolutely horrible :D

For those who remember the Amiga, its OS was written in C - and there was a group who rewrote the OS routines in assembler; these were a tiny fraction of the size and ran many times faster.
 
Nigel Goodwin,

Modern compilers rely on the computers being really fast - the code isn't generally that great.

Right on! And turning on the optimizing option of a high-level compiler usually means that the same executable will not work on another platform until it is tweaked again.

If you disassemble PIC C code it's absolutely horrible

The same is true for the Dog C---- that M/S puts out. It usually executes correctly, but its implementation is poor. They rely on gobs of memory and fast processors to blast past those shortcomings. They have been working on the O/S for years now, and they still don't have many of the features, like comprehensive logging and checkpoint/restart, that I took for granted on a mainframe I worked with many, many years ago.

Ratch
 
My recent experience has been that you can write C/C++ code that compiles into relatively efficient machine code as well as C/C++ code that doesn't.

When optimizing C/C++ code for an Arduino, I've made drastic speed improvements by avoiding object orientation, stream operators, and the associated dynamic allocation and instantiation in favor of a traditional imperative programming model. The last project needed to execute the entire main loop every 5ms. With these "features" included as part of a library, it executed in 20-30ms. Without them it was only over 5ms here and there. Updating the relatively slow LCD display 6-8 characters per iteration instead of an entire line per iteration got me under 2.5ms.

The difference between good code and bad code has never really been apparent for anything I've done in Visual C#, though. It runs, and although I'm sure there's a measurable difference, all the overhead associated with GUIs and event handlers and whatnot makes it a moot point.
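A rough, hypothetical C sketch of the staged display update mentioned above - writing only a handful of characters per main-loop pass instead of a whole line. The buffer size, function names, and characters-per-pass figure are assumptions for illustration, not the actual Arduino code:

```c
#include <stdio.h>
#include <string.h>

/* Amortise a slow display update across main-loop passes by writing
   only a few characters per pass instead of a whole line at once. */
#define LINE_LEN        16
#define CHARS_PER_PASS   6

static char pending[LINE_LEN + 1];
static int  next_col = LINE_LEN;          /* LINE_LEN means nothing pending */

static void lcd_put_char(int col, char c) /* stand-in for the slow LCD write */
{
    printf("write col %2d: '%c'\n", col, c);
}

void lcd_request_line(const char *text)   /* queue a full line for output */
{
    snprintf(pending, sizeof pending, "%-*s", LINE_LEN, text);
    next_col = 0;
}

void lcd_service(void)                    /* call once per main-loop pass */
{
    for (int n = 0; n < CHARS_PER_PASS && next_col < LINE_LEN; n++, next_col++)
        lcd_put_char(next_col, pending[next_col]);
}

int main(void)
{
    lcd_request_line("Flow: 12.3 L/h");
    for (int pass = 0; pass < 3; pass++) { /* three passes flush the 16 chars */
        printf("-- main loop pass %d --\n", pass);
        lcd_service();
    }
    return 0;
}
```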
 
A friend of mine used to do firmware for a major telco's line of "cheap phones" whether it be IP based or whatever. His job was to "squeeze" all the features into a very small instruction space. We worked together some 25 years ago and he was an exceptional assembly language coder. I did much better writing algorithms and specifications and helping others to get over hurdles. We modified a commercial product's terminal handler to handle CRT terminal deletes. We had the sources, but it was still difficult. I came up with the how and he came up with the muscle.
In a project of mine using the COSMAC 1802, I hand-assembled all of the code for an application I wrote, and I spent most of the time creating the equivalent of GOSUB. The COSMAC did not have that function, but all of the registers could be either a GP register, a stack pointer, or a program counter. Branches were done with "pass the program counter".

In college, way before the PC and about the time the Intel 4004 and 8008 debuted, my lab partners and I built a tiny CPU out of SSI logic. It had microcode 16 bits wide by 16 words long and could sort 10 numbers in ascending or descending order depending on the microcode. One guy breadboarded the data structure, another the memory, and I got stuck making the program counter and handling branches. It was fun.
 
Howdy, First, I'm biased: been writing Asm >30 yrs. In early uP days (uC not out yet), a 4 MHz clock was fast, so Asm was big for speed. Later, it was just my inertia.

For me, the biggest difference is that compilers can't make any assumptions: any subroutine call must save & reload registers on return, and whatever is done in the sub must be saved before return. Whereas I can call with the registers set, then return with results in the same registers. It's similar with interrupts; I can change register contents by keeping in mind what & where it's used.

One trick I use for abnormal-condition timeouts is to have the interrupt adjust the stack address to purge everything the uC saved on the interrupt call. I can then directly jump to the condition handler. You can also use this in regular program flow, but I consider it bad practice. A compiler would never let you do this.
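There is no portable C equivalent of manually purging what the uC pushed in an interrupt, but the closest high-level analogue of "unwind the stack and jump straight to the condition handler" is setjmp/longjmp. A minimal sketch, with the timeout condition faked by a simple counter:

```c
#include <setjmp.h>
#include <stdio.h>

static jmp_buf timeout_point;

/* In a real system a timer interrupt might trigger this; here a step
   counter stands in for the abnormal-condition timeout. */
static void do_work(int step)
{
    if (step > 3)
        longjmp(timeout_point, 1);   /* unwind back to the handler */
    printf("working, step %d\n", step);
}

int main(void)
{
    if (setjmp(timeout_point) == 0) {
        for (int step = 0; ; step++)
            do_work(step);           /* normal program flow */
    } else {
        printf("timeout: abnormal-condition handler reached\n");
    }
    return 0;
}
```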
 
OlPhart,

I too am a long-time Assembler programmer. Yes, you can do a lot with assembly, like adjusting stack pointers to speed things up, as long as you return everything to what it should be when you are finished. You just don't have the control with high level languages that Assembler provides.

Ratch
 
I, like many, have been using assembler on and off for over 30 years and have a few observations gained over this period.
Anyone can write a quick C program to run on a PC. There are Gigs of RAM and sloppy programming is often obscured by fast processors.

Many C programmers (I have a very good one working for me) think this can easily be adapted for an embedded system. WRONG. You cannot simply malloc(1M) on most embedded systems as you can on a PC. You need a detailed understanding of what's going on at the hardware level, or at the coal face as we know it.

My prodigal son ran into these problems on a recent job, starting with void main(void). He had no comprehension of how the program got to main, i.e. reset vectors were alien to him. He declared all vars without considering the difference between idata and xdata. He was using typically lazy high-level stuff like multiplying variables by 16, where the compiler hadn't figured out that a shift operation would have been far faster.
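As a small, hypothetical illustration of that multiply-by-16 point: on a core with no hardware multiply, the explicit shift avoids a software multiply call if the compiler doesn't make the substitution itself.

```c
#include <stdint.h>

/* Hypothetical example: scaling a variable by 16. A decent compiler
   should reduce the multiply to a shift, but if it does not, the
   explicit shift avoids calling a software multiply routine. */
uint16_t scale_mul(uint16_t x)   { return x * 16; }   /* may become a mul call */
uint16_t scale_shift(uint16_t x) { return x << 4; }   /* four left shifts */
```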

Unless you are very unlucky, you are more likely to encounter bugs in a C compiler than in an assembler, and sadly they don't teach assembler these days. Hence most students cannot drop down to the assembler code and work out what's going on. Of course, you do need a far more disciplined mind using assembler to ensure, for example, that what goes on the stack in a call comes off at the end.

If I had my way, final-year projects would not consist of adding up the ages of the children in your class (I kid you not, this was one question in a 2010 paper I came across), but would have them designing a 16x16-bit multiply routine in 16 bytes of stack...with no mult instructions. The art of reverse engineering divide routines to try and optimise them, and of understanding how restoring and non-restoring division works, is dead.
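For reference, the algorithm such a routine would implement is the classic shift-and-add multiply; a C sketch of it (the assembler exercise would of course do this with registers and the carry flag):

```c
#include <stdint.h>

/* Classic shift-and-add 16x16 -> 32-bit multiply, the algorithm an
   assembler routine would implement on a core with no MUL instruction. */
uint32_t mul16x16(uint16_t a, uint16_t b)
{
    uint32_t product = 0;
    uint32_t addend  = a;            /* shifted copy of the multiplicand */

    while (b != 0) {
        if (b & 1)                   /* add when the low multiplier bit is set */
            product += addend;
        addend <<= 1;
        b >>= 1;
    }
    return product;
}
```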

Learning the nitty gritty of a micro can take a long time, often years on bigger devices. You simply cannot come along having 'done a bit of C' and expect to get the best out of a state-of-the-art micro. I don't just mean the SHARC family of processors, but the type that uses the 8051 core with unimaginable peripherals sitting around the core, like the Silabs family.

New compilers for new chips come out all the time. The techniques of using assembler, or of knowing how to force a compiler into optimising, are learned and are a far more transferable skill.

Adobe Acrobat is 90 Meg on my PC. As a basic document viewer, this is sloppy programming at its best, considering most of the routines are native to Windows DLLs.
 
Ahhh, kindred spirits respond... My axiom for Asm: upside, it allows you to do anything; downside, you're responsible for everything it does.

To WTP Peppers' point (one of): I've only had an assembler "lie" to me (an instruction not behave as described) a couple of times. I've seen compilers act up, it seems mostly in library calls returning unexpected stuff given ill-tempered data. Maybe not that bad, but with assembler it's easier to trace weird behavior.
 
For example, if you write for(i=0;i<10;i++), the compiler will use a memory location for i, and every time i is referenced it will talk to memory; much faster is to use a register and do INC EAX, then CMP EAX,EBX or TEST EAX,EBX.

Adobe Acrobat is 90 Meg on my PC. As a basic document viewer, this is sloppy programming at its best, considering most of the routines are native to Windows DLLs.


Right on! I couldn't agree more.
 
I still code in asm but I need to get comfortable with the higher level languages as well, for libraries and suchlike.

My early assembler days were with the Commodore 64's 6502 chip. I even sold proggies in assembler to Compute - remember the type-in days?


Anyway, I just did an app that required assembler: a 16F886 running a vertical farming unit driven by a 5 watt solar panel and a 7 Ah gel cell. I did it in asm to lower the clock rate and reduce the energy usage. An HLL would need the higher speeds to keep up.

Since it handles its own electronic fusing, a decent program response time is required.
 
For example, if you write for(i=0;i<10;i++), the compiler will use a memory location for i, and every time i is referenced it will talk to memory,
Not EVERY compiler will do that, just the really stupid ones. For example, avr-gcc tries to use registers whenever possible. Having 32 registers, it usually pushes the old values onto the stack before entering the function, so it is not too inefficient. Also, you can tie a variable to a register.
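A minimal sketch of the "tie a variable to a register" feature mentioned above, assuming avr-gcc; the global register variable syntax is a GCC extension, and the choice of r3 (with the matching -ffixed-r3 build flag) and the variable name are arbitrary examples:

```c
#include <stdint.h>

/* GCC extension (usable with avr-gcc): pin a variable to a named CPU
   register so it never lives in RAM. Build with -ffixed-r3 so the
   compiler reserves that register for this variable. */
register uint8_t tick_count asm("r3");

void count_tick(void)
{
    tick_count++;   /* a single register increment, no load/store to memory */
}
```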
 
When a program reaches a certain level of complexity, it is no longer possible for it to be created in one person's mind with assembler language.
 
The advantage of producing your own assembly code is this:
You can create sub-routines that can be called at any time, and as you produce more and more routines, you find that many of the previous ones can be reused.
That's why you can get more hand-written code into a chip that has only a small amount of memory.
In the old days, everything had to be written into 1 or 2k. To top it off, a 3-level chess game was written in 1k (or maybe it was 2k). (For the Sinclair Z80).
 
Hi,


Two things...plus one...

1.
Doing a floor() instruction in asm can be much faster than using the C floor() if you know your target platform.

2.
Asm is considered closer to the hardware than, say, C is. This means that if you know Asm you can think closer to the hardware. Asm almost becomes part of the hardware itself, in that design decisions come from knowing both the various kinds of hardware devices and asm techniques, so that the two 'meld' at some point.

For a good example, say you have to measure current, and a resistor value of 10 ohms isn't too bad a choice based on the current level. You can stick a 10 ohm 1/2 watt resistor in there and it works fine, because you measure the voltage across the resistor with the A/D converter on the uC chip, do the division V/10, and you get your answer for the level of current flow at the time. Done? Well, not really. Knowing Asm, you know that you can divide by 8 much faster than you can divide by 10, because dividing by 10 requires a more complicated algorithm whether you write it yourself or not. So knowing a little bit about Asm means you'll probably pick 8 ohms instead of 10 ohms, and this allows division by 8 with three register shifts (right shifts), which is very, very fast compared to having to do any real division (see the sketch after this post).

The downside to Asm is that you have to put a lot more effort into managing your code, even after you've written hundreds of routines that you use often. The fact is that Asm is closer to the machine than C, and C is closer to the human than the machine, so Asm doesn't abstract the human way of doing things as well as C does. So some things would be quite difficult to do in Asm rather than C.

When I first programmed anything, it was in a language that wasn't even Asm yet: it was hard coding, using switches to set the bits. When I first got into Asm I was overjoyed at the ease with which I could program things.

A language like C++ is nice because you can do object-oriented programming, which automatically means better code management.

3.
There's also the code maintenance aspect. Asm is much harder to maintain than, say, C. When something has to change or be fixed, there could be a lot to consider.
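A tiny C illustration of the divide-by-8 trick from point 2 above (hypothetical names; on a small micro the divide-by-10 version typically calls a division routine, while the shift version is only a few instructions):

```c
#include <stdint.h>

/* Hypothetical current-sense scaling. With a 10-ohm shunt, I = V/10 needs
   a real division routine on a small core; with an 8-ohm shunt, I = V/8
   reduces to right shifts. */
uint16_t current_10_ohm(uint16_t millivolts) { return millivolts / 10; } /* division routine */
uint16_t current_8_ohm(uint16_t millivolts)  { return millivolts >> 3; } /* shifts only */
```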
 
When a program reaches a certain level of complexity, it is no longer possible for it to be created in one person's mind with assembler language.


Every large assembler language project morphs into a high-level language project as the complexity of program and data flow interaction forces you to abstract application data flow from machine data flow. It's not instruction set complexity that creates the need for abstraction, it's data complexity. High-level languages (in the assembler programmer's head instead of a formal language) help reduce complexity back to just plain complicated.

https://en.wikipedia.org/wiki/Programming_complexity

Programming complexity (or software complexity) is a term that encompasses numerous properties of a piece of software, all of which affect internal interactions. According to several commentators, there is a distinction between the terms complex and complicated. Complicated implies being difficult to understand but with time and effort, ultimately knowable. Complex, on the other hand, describes the interactions between a number of entities. As the number of entities increases, the number of interactions between them would increase exponentially, and it would get to a point where it would be impossible to know and understand all of them. Similarly, higher levels of complexity in software increase the risk of unintentionally interfering with interactions and so increases the chance of introducing defects when making changes. In more extreme cases, it can make modifying the software virtually impossible. The idea of linking software complexity to the maintainability of the software has been explored extensively by Professor Manny Lehman, who developed his Laws of Software Evolution from his research. He and his co-Author Les Belady explored numerous possible Software Metrics in their oft cited book,[1] that could be used to measure the state of the software, eventually reaching the conclusion that the only practical solution would be to use one that uses deterministic complexity models.
 