A concept of a new PL for embedded applications

eblc1388 said:
I doubt they should.

Perhaps you're right, they should not, and that's why I propose a new paradigm.
Nowadays assemblers are turning 60, and I think it's about time to invent a better bicycle.

eblc1388 said:
When one uses the name of a control bit to refer to it, and uses the proper "include" file during assembly, why should the assembler complain of any errors?

For example, in the PIC16F690 the ADFM bit is defined in the ADCON0 register, while in the PIC16F88 it is in ADCON1. Thus when changing the device I have to

1. Manually compare the datasheets and find all features of this kind
2. Search the code and replace

bsf ADCON0, ADFM

with something better.

Now guess how I figured out this difference. :)
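
To make the mismatch concrete, the same operation written out for both devices might look like this (a sketch; banksel selects the proper register bank, and the register placements follow the two datasheets):

; PIC16F690: ADFM is in ADCON0
        banksel ADCON0
        bsf     ADCON0, ADFM    ; right-justify the ADC result

; PIC16F88: the very same bit is in ADCON1
        banksel ADCON1
        bsf     ADCON1, ADFM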
 
Nigel Goodwin said:
Probably not? Is there any reason you want me to?

'Cause I care what people think of me :)

Nigel Goodwin said:
BTW, what was the response from the compiler writers to this specific 'problem'?

Their summarized answer: "Yes, we know about this problem; it is on our list, but not at the top priority."
 
Hutorny said:
For example, in the PIC16F690 the ADFM bit is defined in the ADCON0 register, while in the PIC16F88 it is in ADCON1. Thus when changing the device I have to

1. Manually compare the datasheets and find all features of this kind
2. Search the code and replace

bsf ADCON0, ADFM

with something better.

Now guess how I figured out this difference. :)

OK. I see your point.

But if one changes to another processor, one would expect that the registers will not be in the same memory space or carry the same bit meanings until it is confirmed otherwise. Therefore a complete match without user intervention/change is a gift, not something to be taken for granted.

Sorry, I still think sorting this out is the responsibility of the programmer rather than of the assembler.

Or, if you wish, the problem can be tackled via some #define statements or macros in an include file of your own, in which you associate the correct register with the control-bit name.
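
For illustration, such an include file might look like the sketch below (untested; ADFM_REG is a made-up name, and __16F690/__16F88 are the symbols MPASM defines for the selected processor):

; adfm_port.inc - hypothetical portability include
        ifdef   __16F690
#define ADFM_REG ADCON0         ; ADFM is in ADCON0 on the 16F690
        endif
        ifdef   __16F88
#define ADFM_REG ADCON1         ; ...but in ADCON1 on the 16F88
        endif

The main source then carries a single device-independent line:

        banksel ADFM_REG
        bsf     ADFM_REG, ADFM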
 
eblc1388 said:
OK. I see your point.

But if one changes to another processor, one would expect that the registers will not be in the same memory space or carry the same bit meanings until it is confirmed otherwise. Therefore a complete match without user intervention/change is a gift, not something to be taken for granted.

Sorry, I still think sorting this out is the responsibility of the programmer rather than of the assembler.

Yes, sure. But I still think that the development tool (I am not saying the assembler) could help with this.

eblc1388 said:
Or, if you wish, the problem can be tackled via some #define statements or macros in an include file of your own, in which you associate the correct register with the control-bit name.

To be honest, I do not like #define and the preprocessor. It does not make the code more readable, and it is yet another source of hard-to-find errors.

Also, what these EQUs and #defines really do is define a 'model' of the device, and I think there could be a better language for this purpose.
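
For instance, the standard device include files are little more than flat lists of equates, along these lines (values illustrative, in the style of Microchip's processor include files):

ADCON0  EQU  H'001F'            ; register address
ADFM    EQU  7                  ; bit position within its register

Nothing in such a 'model' ties ADFM to any particular register, which is exactly why the assembler cannot catch the ADCON0/ADCON1 mistake discussed above.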
 
Starting from scratch, to achieve incremental improvement, is either folly or hubris. I can't decide which. You're going to need better justification to generate any enthusiasm.

The problem with working on tools is that there is no economic incentive. In the beginning we create tools because there are none. Once they achieve a certain level of functionality we move on to the problems with an economic benefit.

People on this forum complain about the cost of tools. They are expensive because they are hard to develop, debug, support, and maintain. There are freeware tools out there but they are generally poor cousins to the commercial offerings.
 
Hutorny said:
And I have failed to implement interfaces because SDCC, PICC, and mikroPascal for the PIC14 family do not generate proper code to work with function pointers.

Duh. You shouldn't use function pointers; it's nearly impossible to support them on PIC hardware.

PICs do not have a data stack and use an overlay methodology for reusing the same memory in different functions, by evaluating the tree of which functions can be called from which places and prohibiting recursive function calls.

Once you turn a function into a pointer, there is no longer any reliable way to determine the possible call tree (thus a lot of overlay elements have to be made static and the memory usage goes to hell) or to protect against recursive calls.
 
Paul Obrien said:
This sounds a lot like Java, i.e. not MCU specific, etc.

The tool language (e.g. an evolution of macros) will not be MCU specific and could be similar to Java.
The device language (instruction set, registers, peripherals) will be (and must be) device specific.
The device language will be defined in terms of the tool language, and that is the core idea of the proposed language.
 
Oznog said:
Duh. You shouldn't use function pointers; it's nearly impossible to support them on PIC hardware.

PICs do not have a data stack and use an overlay methodology for reusing the same memory in different functions, by evaluating the tree of which functions can be called from which places and prohibiting recursive function calls.

Once you turn a function into a pointer, there is no longer any reliable way to determine the possible call tree (thus a lot of overlay elements have to be made static and the memory usage goes to hell) or to protect against recursive calls.

I can still implement my design in assembler, so I think it is a compiler limitation.
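
For what it's worth, the usual assembler idiom here is a computed goto into a jump table. A minimal sketch (handler_index is an assumed file register in the current bank, and the table must not cross a 256-byte program-memory boundary):

dispatch:
        movlw   HIGH jump_table
        movwf   PCLATH          ; preload the upper bits of the PC
        movf    handler_index, W
        addwf   PCL, F          ; computed jump into the table
jump_table:
        goto    handler_a       ; index 0
        goto    handler_b       ; index 1

Each handler ends with a return, so 'call dispatch' behaves like an indirect call - the very construct the compilers decline to generate because it hides the call tree.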
 
Papabravo said:
Starting from scratch, to achieve incremental improvement, is either folly or hubris. I can't decide which.

Electronics is my hobby, and sometimes I have no spare time for it.
Thus, when starting a project, I cut it into rather small, isolated pieces that can be completed quickly and tested independently of the others.

Papabravo said:
You're going to need better justification to generate any enthusiasm.
The problem with working on tools is that there is no economic incentive. In the beginning we create tools because there are none. Once they achieve a certain level of functionality we move on to the problems with an economic benefit.

People on this forum complain about the cost of tools. They are expensive because they are hard to develop, debug, support, and maintain. There are freeware tools out there but they are generally poor cousins to the commercial offerings.

Agreed, and I have already started working on a better justification and on exposing the economic incentive.
 
Hutorny said:
I can still implement my design in assembler, so I think it is a compiler limitation.

But then you're writing in a completely different language, 100% processor specific - you're also working within the limitations of the hardware, a totally different situation from using an HLL.
 
Benefits of using e#

Papabravo said:
Starting from scratch, to achieve incremental improvement, is either folly or hubris. I can't decide which. You're going to need better justification to generate any enthusiasm.

The problem with working on tools is that there is no economic incentive. In the beginning we create tools because there are none. Once they achieve a certain level of functionality we move on to the problems with an economic benefit.

People on this forum complain about the cost of tools. They are expensive because they are hard to develop, debug, support, and maintain. There are freeware tools out there but they are generally poor cousins to the commercial offerings.

I envision the following benefits of using e#

  1. It will allow writing better-quality code in a shorter time
  2. It will reduce maintenance costs by enabling regression testing (on a simulator or an in-circuit debugger)
  3. It will reduce the impact of migrating to another device/platform
Elaboration on these benefits can be found in this post.
 
Nigel Goodwin said:
Personally I'm extremely doubtful about such claims, but in any case, have you started writing such a compiler?

Not yet. I made a few experiments to prove to myself that the mission is possible, and that's it so far.

The first four steps in my roadmap are:
1. Refine vision
2. Establish requirements
3. Design a device metamodel
4. Make key design decisions on the language, define a protolanguage

And what I am really missing is deep knowledge of other MCUs.
 
Hutorny said:
I envision the following benefits of using e#

  1. It will allow writing better-quality code in a shorter time
  2. It will reduce maintenance costs by enabling regression testing (on a simulator or an in-circuit debugger)
  3. It will reduce the impact of migrating to another device/platform
Elaboration on these benefits can be found in this post.
These are common starting points for every compiler design since the first FORTRAN and COBOL compilers from the late fifties and early sixties. Nothing new here and I'm with Nigel on the dubious nature of the claims. I hope you can prove me wrong but there is an awful lot of experience and knowledge wrapped up in our current tools. These are not stupid or incompetent people, they are giants in this field. The nature of the task is an epic undertaking.
 
Papabravo said:
These are common starting points for every compiler design since the first FORTRAN and COBOL compilers from the late fifties and early sixties. Nothing new here and I'm with Nigel on the dubious nature of the claims. I hope you can prove me wrong but there is an awful lot of experience and knowledge wrapped up in our current tools. These are not stupid or incompetent people, they are giants in this field. The nature of the task is an epic undertaking.

My belief is that defining the MCU model in the same language the program is written in will open up possibilities hardly achievable with the current tools.
 
Hutorny said:
My belief is that defining the MCU model in the same language the program is written in will open up possibilities hardly achievable with the current tools.

No disrespect, but that looks like 'marketing speak', and it has no meaning in the real world.

As I see it, what you want to do is write a language that's fully functional on any processor it's applied to - so you can take a program from any processor and transfer it to any other, with no changes or limitations.

It's a nice idea, and it's perfectly possible, BUT it means crippling the language to the limitations of ALL the processors - this would make it a completely useless language, and rather than adding the parts you think current languages lack, it would do the complete opposite and simply remove them altogether.

In any case, you don't appear to have even started such a compiler, nor even given any real thought to it - and, to be honest, as you seem to want a language that completely 'spoon-feeds' you, I'm doubtful that you could write a compiler anyway.
 
Nigel Goodwin said:
No disrespect, but that looks like 'marketing speak', and it has no meaning in the real world.

As I see it, what you want to do is write a language that's fully functional on any processor it's applied to - so you can take a program from any processor and transfer it to any other, with no changes or limitations.

Not quite correct. If your program is written for a specific MCU and uses MCU features not available on other MCUs, you will have to make changes.
However, many parts of an embedded application (such as state machines, math, and conversions) can be written for a 'more generic MCU', and these parts can be transferred to any other MCU that fits under the same 'generality umbrella'.

Nigel Goodwin said:
It's a nice idea, and it's perfectly possible, BUT it means crippling the language to the limitations of ALL the processors - this would make it a completely useless language, and rather than adding the parts you think current languages lack, it would do the complete opposite and simply remove them altogether.

My idea is to give developers the freedom to make that choice: either write within the 'limitations of ALL the processors' and have highly portable code, or write for a given MCU and have efficient code.

The language itself will not carry any information about the MCU; instead it will provide a tool to define an MCU (some MCU or any MCU, depending on how good the metamodel turns out to be).

Nigel Goodwin said:
In any case, you don't appear to have even started such a compiler, nor even given any real thought to it - and, to be honest, as you seem to want a language that completely 'spoon-feeds' you, I'm doubtful that you could write a compiler anyway.

IMHO, writing the compiler is not that great a challenge compared to designing the language so that it solves the tasks and remains clear and easy to understand and use. And this is where I need feedback from other developers. This kind of discussion straightens out my thinking and motivates me to find better explanations of my idea.

Once the language is designed and its BNF established, writing the compiler will be a purely technical task. Years ago I did it for some small sort-of-a-language data exchange formats.
 
Hutorny said:
Not quite correct. If your program is written for a specific MCU and uses MCU features not available on other MCUs, you will have to make changes.

You mean just like existing languages? But I thought the whole reasoning behind this was that you weren't prepared to do so.
 