
Optimization and Resource Policy Debate


Papabravo

Nigel and I have exchanged a handful of posts, in another thread, on our views of optimization and resource allocation. I think this is an important topic, and the community will benefit from a frank and fair exchange of views. Rather than hijack the original thread, I suggest we continue the discussion here.

First let me stipulate that this discussion is not about winners and losers. IMHO Nigel is an extremely competent and dedicated individual. In short he is a man worthy of our respect.

In embedded design there are a number of common metrics that we use to compare competing solutions. This applies to both hardware and firmware. A non-exhaustive short list would include gate count, package count, board area, bytes of code space, bytes of data space, number of instruction cycles, clock frequency, and so forth.

The start of this discussion was an attempt by a poster to save a couple of bytes while writing data to an LCD with an 8051. I made a comment critical of the time spent saving what I considered a trivial number of bytes before he even had a working solution.

In a somewhat over-the-top riposte, Nigel rose to the defense of optimizers and super coders everywhere dedicated to the absolute and total elimination of bloatware.

I replied that in an earlier time I would have agreed with him, because our options for dealing with the problems of not enough time and not enough memory were restricted. I argued that today there are many options for dealing with running out of resources, and therefore there is less of an imperative to spend inordinate amounts of time on manual optimization.

The crux of my argument is that silicon resources are cheap and, according to Moore's law, are getting cheaper at an accelerating rate. People, with salaries, benefits, and expenses, are getting more expensive, also at an accelerating rate. At some point the imperatives of our lives and our businesses will dictate the following priorities:

First, make it work
Second, make it small or fast
Third, make it elegant

To respond to Nigel's last point in the previous thread: in 1980, when the 8051 was introduced, it had 4K of code space. If you ran out of space there were not many choices that maintained the single-chip nature of the design. Later this was expanded to 8K and beyond. Today you can buy 8051s from numerous manufacturers with a full 64K of code space, in packages a fraction of the size of a 40-pin DIP gunboat. Most of our company's products use 8051s, and we have trouble filling even 40% of the available code space. There are no doubt problems which cannot be solved in a midrange PIC16Fxxx with even 8K words of program memory, but do we really imagine that Microchip has no alternatives for us?

As a final point in this post I would like to add the following idea, which did not originate with me.

There is no substitute for the right algorithm

What this means to me is that if you pick the wrong algorithm, you can optimize to your heart's content and never achieve the result you would get from throwing your first attempt away and choosing a superior algorithm. The classic example is sorting five records versus 1000 records. In the first case a bubble sort is superior to a quicksort; in the second case the reverse is true.
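
To make the sorting example concrete, here's a minimal sketch in C (illustrative only, not production code; the crossover point of 16 is a guess you would measure on your own target):

Code:
#include <stdlib.h>

/* O(n^2) bubble sort: tiny code size, no recursion, no stack usage
   beyond a few locals -- attractive on a small micro for short lists. */
static void bubble_sort(int *a, size_t n)
{
    for (size_t i = 0; i + 1 < n; i++)
        for (size_t j = 0; j + 1 < n - i; j++)
            if (a[j] > a[j + 1]) {
                int tmp = a[j];
                a[j] = a[j + 1];
                a[j + 1] = tmp;
            }
}

static int cmp_int(const void *p, const void *q)
{
    int a = *(const int *)p, b = *(const int *)q;
    return (a > b) - (a < b);
}

void sort_records(int *a, size_t n)
{
    if (n <= 16)            /* below the crossover, the simple sort's low overhead wins */
        bubble_sort(a, n);
    else
        qsort(a, n, sizeof a[0], cmp_int);  /* O(n log n) on average */
}

On a small micro the bubble sort also wins on code size and stack usage, which is often the real constraint.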

That's my opinion -- yours can and probably will vary. I really want to know what you all think.



 
Wow, that was a long post.

There was a time for me as well, when I would optimize code down to its most efficient/elegant greatness, but now I can't be bothered all that often. It still makes me semi-sick when I have a Windows program that does relatively nothing and it's 100 MB.

Specifically for uCs, I would rather buy an 8k uC for $2.14 and program it without spending a few more hours on code-space optimization than buy a $2.00 4k uC and spend those extra hours. For production, where unit costs matter, it would probably be a different story, but here it's mostly hobby work. Even for limited production runs of specialized products I would rather have that extra hardware to work with, in case of revisions or firmware/hardware expansion in the future.
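
To put rough numbers on that trade-off (purely illustrative, and the hourly rate is my own assumption): the part-cost saving is $2.14 - $2.00 = $0.14 per unit, so an hour of engineering time valued at, say, $70 breaks even only at $70 / $0.14 = 500 units for every hour spent squeezing the code. On a hobby run of ten boards the saving is $1.40, which buys nobody's time.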
 
Papabravo said:
There is no substitute for the right algorithm

What this means to me is that if you pick the wrong algorithm, you can optimize to your heart's content and never achieve the result you would get from throwing your first attempt away and choosing a superior algorithm. The classic example is sorting five records versus 1000 records. In the first case a bubble sort is superior to a quicksort; in the second case the reverse is true.

As you say, discussion, NOT argument :lol:

OK, I consider that statement too sweeping, and your example tends to prove it. You gave two examples, both of which are 'better' for their particular circumstances - generally in a sorting routine you would only implement ONE type of sort, so you have to compromise. It would be uncommon to choose a bubble sort, as it's only effective in very specific cases; there are many superior general-purpose sorting routines, of which the quicksort is only one example.

Certainly though I can envisage some situations where using the right algorithm makes a massive difference, but I don't think sort routines were a good choice?

So how about:

There is no substitute for the right algorithm, under some circumstances

:lol: :lol:
 
Hello,

Papabravo, I have read your posts in the Microcontrollers section; these are my thoughts:

I think optimization is important, but not to extremes.
Let's say we have 1000 16F628 PICs already bought and ready to receive a program, and let's say our program is using 51% of its ROM. I wouldn't optimize it just so it could fit into a 16F627.

But if my program could run on an 18F after optimization, but needs a dsPIC in its current form, I would definitely try to optimize it.

To sum up:
Saving a byte or two is not worth it, but if you can save a lot (say $5 per part) then it's worth a try. I don't think hobby projects should be optimized at all; it's just not worth it.

I am sorry if anybody considers my thoughts wrong.

PS: I really admire those super coders; some of them can optimize code remarkably (squeezing every cycle/byte out of an MCU). I consider it a nice sport/exercise.
 
Jay,

You make an excellent point. Super coding is not a skill that one can acquire and apply quickly; it literally takes years to develop the skills necessary to succeed at it. I do not mean to imply that the effort to acquire better coding skills is wasted, only that in practice it is important to recognize that there is a point of rapidly diminishing returns, and to be mindful of where that point is.

All,

If my sorting example was not germane to the point I was trying to make, can anyone think of an example demonstrating that optimizing a bad algorithm, simply because you don't know a superior alternative exists, is in fact a complete waste of time?
 
Papabravo said:
If my sorting example was not germane to the point I was trying to make, can anyone think of an example demonstrating that optimizing a bad algorithm, simply because you don't know a superior alternative exists, is in fact a complete waste of time?

I've been trying to think of one, but can't - to be honest, it sounds like the sort of thing a teacher might say?

BTW, love the thread title! :lol:
 
OK

How about this: in several different applications, the use of a cyclic redundancy check is a requirement. The usual theoretical development is in the context of a feedback shift register. It is not well understood that you can do the computation eight bits at a time, saving a good deal of instruction execution time at the expense of the space required for a lookup table.
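
Here's a sketch of the idea in C (illustrative only; I've picked the common CRC-16/CCITT polynomial 0x1021 as an example). The bit-serial shift register runs once per table entry at start-up; after that, every input byte costs one table lookup instead of eight shift/XOR steps, at a price of 512 bytes for the table.

Code:
#include <stdint.h>
#include <stddef.h>

/* 256-entry table for the CRC-16/CCITT polynomial 0x1021. */
static uint16_t crc_table[256];

/* Run the classic feedback shift register once per possible byte value
   to fill the table; on a real part this could be a constant table in ROM. */
void crc16_init(void)
{
    for (int byte = 0; byte < 256; byte++) {
        uint16_t crc = (uint16_t)(byte << 8);
        for (int bit = 0; bit < 8; bit++)
            crc = (crc & 0x8000) ? (uint16_t)((crc << 1) ^ 0x1021)
                                 : (uint16_t)(crc << 1);
        crc_table[byte] = crc;
    }
}

/* Byte-at-a-time update: one lookup replaces eight shift-register steps. */
uint16_t crc16_update(uint16_t crc, const uint8_t *data, size_t len)
{
    while (len--)
        crc = (uint16_t)(crc << 8) ^ crc_table[((crc >> 8) ^ *data++) & 0xFF];
    return crc;
}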

Second example: is it better to use multiply and divide library subroutines on a processor that cannot do those operations in hardware, or is it better to find an alternative, such as lookup tables, optimizing the library routines, or writing your own?
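
For the multiply case, one classic table trade-off (again a sketch of mine, not anyone's library code) is quarter-square multiplication: a*b = (a+b)^2/4 - (a-b)^2/4, exact in integer arithmetic because the two floor errors cancel. It turns an 8x8 multiply into two lookups and a subtraction, at the cost of about 1 KB of ROM.

Code:
#include <stdint.h>

/* Table of n^2/4 for n = 0..510. Max entry is 510*510/4 = 65025,
   which still fits in a uint16_t. */
static uint16_t quarter_square[511];

void mul_init(void)
{
    for (uint32_t n = 0; n < 511; n++)
        quarter_square[n] = (uint16_t)((n * n) / 4);
}

/* 8x8 -> 16-bit multiply with no hardware multiplier and no
   shift-and-add loop: two lookups and one subtraction. */
uint16_t mul8x8(uint8_t a, uint8_t b)
{
    /* a+b <= 510 and |a-b| <= 255, so both indices are in range. */
    uint16_t sum  = (uint16_t)a + b;
    uint16_t diff = (a > b) ? (uint16_t)(a - b) : (uint16_t)(b - a);
    return quarter_square[sum] - quarter_square[diff];
}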

It comes down to priorities and what is considered valuable.
 
Papabravo said:
It comes down to priorities and what is considered valuable.

But both examples show the original premise (or perhaps not yours!) to be false - different methods provide 'better' results, but in different ways. Which is why I think it's not true: it's too broad and sweeping. I quite understand the idea behind it! - but semantically it fails.
 
Hello,

Let me first say what a great read this thread, and the one where it started, have been! Thanks! I learn a lot listening to giants talk, as well as standing on their shoulders. :)

For an example, how about trying to optimize a floating-point routine where a 'close enough to work' integer one would do?
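
Something like this is what I have in mind (a sketch in C, assuming a 10-bit ADC and a 5 V reference - both my own made-up numbers):

Code:
#include <stdint.h>

/* Floating-point version: simple to write, but drags the whole
   software float library into a micro with no FPU. */
float adc_to_volts_f(uint16_t raw)
{
    return (float)raw * 5.0f / 1023.0f;   /* 10-bit ADC, 5 V reference */
}

/* Integer version: work in millivolts instead. Multiply first, divide
   second, keeping the intermediate in 32 bits.
   Max intermediate: 1023 * 5000 = 5,115,000, well inside uint32_t. */
uint16_t adc_to_millivolts(uint16_t raw)
{
    return (uint16_t)(((uint32_t)raw * 5000u + 511u) / 1023u);  /* +511 rounds to nearest */
}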

I think Jay's point was pretty good:
Saving a byte or two is not worth it, but if you can save a lot (say $5 per part) then it's worth a try. I don't think hobby projects should be optimized at all; it's just not worth it.
While I agree, I do think there is a real art to making something precise and efficient, and while in 'industry' there may not be time or budget for it, a person who is only using a project to improve his mind can afford to do it.
At the same time, the difference in price for a better-equipped micro on a hobby project is really not worth sweating over. This kind of project would likely only produce a few copies.
It seems to me that creativity flows from a lack, rather than an abundance.
See, you guys have my mind oscillating. :lol:
Thanks again, for making me think so much.
Regards,
Robert
 
I am curious to know if there are any folks out there who have run up against a hard limit, like code space, RAM space, too much execution time, and so on, with their microcontroller of choice. Did you see it coming? What alternatives were available to relieve the situation? What did you ultimately end up doing?

This has not happened to me since about 1981 when we had 1K of code space on the 8048/8041.
 
Papabravo said:
The start of this discussion was an attempt by a poster to save a couple of bytes while writing data to an LCD with an 8051.
That was me, and I think I actually did save a couple dozen bytes.
I just loaded the data into several ram locations like this:

Code:
mov 30h,#<character code>
mov 31h,#<character code>
...
mov xxh,#<character code>
ljmp <subroutine start address>
The 30h, 31h, etc. refer to internal RAM locations.
Basically, I am filling that memory with the data to be sent to the LCD, terminated with ASCII code 0.

My subroutine is something like this:
Code:
send_lcd:
    mov  R0,#30h            ; point R0 at the first character in internal RAM
next_char:
    mov  P1,@R0             ; put the character code on port 1 (the LCD data bus)
    inc  R0                 ; advance to the next character
    clr  P3.7               ; drive EN low
    mov  A,#0FFh            ; load the delay counter
delay:
    dec  A
    jnz  delay              ; stay here until A = 0
    setb P3.7               ; EN high again
    cjne @R0,#0,next_char   ; not the 0 terminator? send the next character
    ret                     ; (loop to next_char, not send_lcd, so R0 isn't reset)

Basically, it reads whatever is stored in RAM, sends it to the LCD, and holds EN low for a 255-count software delay.

Even though there may be spare ROM, I still think code optimization is very important, because if you build a very complex application without ever thinking about compacting your code, you will end up requiring tons and tons of ROM.
 
Papabravo said:
I am curious to know if there are any folks out there who have run up against a hard limit, like code space, RAM space, to much execution time and so on, with their microcontroller of choice.
I always size the hardware to the task, but that being said, I've never had anything go over 64k of ROM space, which is the max my uC comes in. It really does take quite a bit of programming to fill up 64k without using large data tables. The only limit I've hit is processing power: I've found my 20 MHz 8-bit uC is just not fast enough to do all the calculations, within the required timing, to keep the accuracy I want for one of my projects.

Of course now, I've just ordered an AT91SAM7 ARM7 controller with 256k of ROM space, 64k of RAM, and a 55 MHz core speed. That should do for a little bit.

It's very much an issue, as well, of code expanding to fill any space available. With 256k, I'm thinking of implementing FAT32 routines for storing data on a storage card, and maybe a TCP/IP stack and a MAC/PHY with an Ethernet connection. All for datalogging.

This ARM7 does look really cool. Hopefully I can get it up and running as easily as I hope (and faster than Mstechca's 8051 project). If so, I'll probably write a beginner's article on it.
 
You'll find the ARM7 an interesting diversion. You will be especially thrilled with the complexity of mapping the internal peripherals to the external pins. Have a blast.
 
Haha! Great, thanks. Just trying to lay out a board for the 64-pin QFN package looks like it's going to be a challenge.

Trying to figure out a place to put the 555 timer to drive the LCD is proving to be difficult as well.
 