Linux PLC: ISA Article and a Charter of sorts (long)
I wish to propose some ground rules with the reasoning behind them. ...
By Curt Wuollet on 7 January, 2000 - 9:10 am

Hi Ken & all

Ken:
Got a chance to look at the article on the ISA site. Great description of the reasons for a free open source controller and open source advocacy.
One little thing though, the currency at the open source bazaar is recognition. I won't bitch about WOT not being mentioned if you, and all for that matter, keep clearly in mind that this is a community project. The thing that will get people to code and contribute is to recognize those actions. Yours is a major contribution at this critical stage; for that we all thank you.

All:
We must develop a sense of community and mutual appreciation. That, not the simple fact of Open Source, is the essence of making this thing work. We all need to check our commercialism at the door. The community on the Automation List is very diverse and very partisan. We have not degenerated into flame wars yet, although I have heard some discouraging words. That's not a problem
as I don't expect those folks will be contributing. For those who want the project to succeed, please try to see others' reasoning and where we can converge, not diverge. I, for my part, struggle with my ego to plot a course
that will be maximally inclusive. This is very difficult as there are already many diverse opinions on how to proceed. I have no mutually agreed-upon authority to do this, but I did start the project and am willing to try. I wish to propose some ground rules with the reasoning behind them. These may be thrown out if you can develop a larger consensus than I can. Indeed, I can be thrown out if someone can please a larger group of actual contributors. These will be carefully limited to what it will take to get a start and a reference implementation. If we can't achieve some sort of consensus, the advantage of having many developers is lost. If there's
something here you simply can't live with, post a constructive alternative. If you see more wisdom or a better way to serve the whole community,
post it. But, please do try to consider what's best for all.

License: GPL

Reasons: So it's owned by everyone equally and can't be subverted to exclude anyone. Including everyone excluding no one.

OS: Linux.

Reasons: It's the only truly open platform and the GPL guarantees that.
Anyone can get it and the tools it provides which are fully adequate for the task. It provides the libraries and tools to do the job
with less reinventing of the wheel. It provides for the use of remote HMIs and distributing tasks out of the box. Done in Linux, you will not need to buy add-ons or tools or licenses. Revision control will be possible, as will
keeping a bunch of users and developers in sync. It can be scaled to the job and deployed without regard for legalities or restrictions. No one can force a rewrite or upgrade. With care, code can be reasonably portable. It already runs on more platforms, from uCsimms to SBCs to Enterprise-class servers to Beowulf supercomputers. It runs on almost every processor within reason and a few more than that. It has RTOS, embedded, and full versions with an agreed-upon and consistent interface, EL/IX. Advanced networking is designed in. Tons of programs and functions with source code are available for integration and study. Last, but not least, we have the source code, so the applications we write can fully coordinate with and exploit the OS features. The project will be officially agnostic on ports to other platforms _after_ there's a system to port.
Including everyone excluding no one.

My personal opinion? Porting to a proprietary OS defeats the entire purpose and will make the project a part of the problem rather than a solution. Flame away.

Language: C

It is the native language of the platform. All the languages that I've heard anyone mention can be and are interfaced to C. It is the most
universal and is understood by more people than any other. It is more portable across platforms and compilers exist on even the smallest.
There is absolutely no way that we won't end up doing parts of this in C anyway. If the core system is done in C, any language can be used for clients and modular components. We will need C performance and possibly inline assembler in places. Many things on Linux require C interfaces. Bonehead C is preferred whenever possible, so boneheads like me can still contribute.


Including everyone excluding no one.

Architecture: Modular with flexible interfaces.

To allow cooperative development and utmost flexibility and accommodate as many different ideas and features as possible. To allow everything on one machine or distributed functions. Examples: Sockets allow PLC and HMI on
the same machine or across the world. Shared memory I/O map allows access from user code, RTLinux, modular hardware drivers or compiled kernel drivers. TCP/IP is central as it is ubiquitous and universal. I am working on a shared memory spec which I need for the Modbus I/O driver I'm working on now. This is about as flexible as it can be as new structures can be
added to accommodate drivers as they are added. We need a small working core that's "as simple as possible but no simpler," to use as a research tool and as proof of concept. Just a working PLC with the tools to make it usable. If we don't do this first, I'm afraid the project will die. Once
this is complete, we can add or replace anything with a framework to test it in.
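
To make the sockets point above a little more concrete, here is a rough, purely illustrative sketch of what an HMI-side client might look like talking to a PLC core over TCP, whether that core is on localhost or across the world. The port number and the two-byte request/reply framing are invented for the example; nothing here is a project API.

/* Hypothetical sketch, not a project API: an HMI-side client asking a
   PLC core process for one register over TCP. Port and framing invented. */
#include <string.h>
#include <unistd.h>
#include <netdb.h>
#include <sys/socket.h>
#include <netinet/in.h>

int read_register(const char *host, unsigned short regno, unsigned short *value)
{
    struct hostent *he;
    struct sockaddr_in sa;
    unsigned short req;
    int fd;

    he = gethostbyname(host);          /* "localhost" or a plant across the world */
    if (!he)
        return -1;

    fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0)
        return -1;

    memset(&sa, 0, sizeof(sa));
    sa.sin_family = AF_INET;
    sa.sin_port   = htons(1502);       /* invented port number */
    memcpy(&sa.sin_addr, he->h_addr, he->h_length);

    if (connect(fd, (struct sockaddr *)&sa, sizeof(sa)) < 0) {
        close(fd);
        return -1;
    }

    req = htons(regno);                /* ask for one register, get one value back */
    if (write(fd, &req, sizeof(req)) != sizeof(req) ||
        read(fd, value, sizeof(*value)) != sizeof(*value)) {
        close(fd);
        return -1;
    }
    *value = ntohs(*value);
    close(fd);
    return 0;
}

The same client code works unchanged whether the PLC and HMI share a machine or a continent, which is the point of putting TCP/IP at the center.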

Including everyone excluding no one.

I/O:

This is a problem. By this time next year, I predict almost everyone will support Ethernet. Unfortunately, they will do everything in their power to support it in a proprietary, non-interoperable fashion. Example: ControlNet.
I feel the best course is to emphasize TCP/IP as there will be profibus on TCP/IP, devicenet on TCP/IP, etc. There are boards to interface with
almost all existing fieldbus systems. The boards are for the most part expensive and will remain so because organizations like ODVA and Profibus
et al. want it that way. Of course, we can use dumb boards and do the protocols in software, but this has its limits. I chose Modbus and
Modbus/TCP for starters because they use the peripherals that everyone already has and I can get the information to do it and release the result under the GPL license. Most of the others are the real problems to be solved in this industry if interoperability is to be achieved. I don't hold out much hope for vendor cooperation either. This will require real commitment and resources and in some cases may not be possible without legal problems. Best plan may be to form groups of people who need a particular proto and pool resources to get it done. Local I/O should require only drivers. There will be Linux friendly board vendors who will see this as an opportunity. Years of proprietary greed and self-interest won't be undone quickly, but we have a lot of powerful people reading this.
Please make sure anything you can contribute here can be GPL'd.

Here we'll include as many as we can. Sorry. The vendors need a change in thinking: to promote instead of restricting, to open and guide rather
than close and control.

On reflection, it might have been easier to just do what I was going to do and then release it. I don't think that would work with this crowd. It must be your project from the start.

Regards

Curt Wuollet

_______________________________________________
LinuxPLC mailing list
LinuxPLC@linuxplc.org
http://linuxplc.org/mailman/listinfo/linuxplc

By Dan Pierson on 7 January, 2000 - 9:53 am

> From: Curt Wuollet [mailto:cwuollet@ecenet.com]
> Subject: LinuxPLC: Linux PLC: ISA Article and a Charter of sorts(long)

> If there's something here you simply can't live with, post a
> constructive alternative.
> If you see more wisdom or a better way to serve the whole community,
> post it. But, please do try to consider what's best for all.

> License: GPL
>
> Reasons: So it's owned by everyone equally and can't be subverted to
> exclude anyone. Including everyone excluding no one.

I really like this, but it has a major problem. It may make the result of this project unusable by too many people. To have a significant effect on opening the automation industry we need to have a lot of users. The problems are:

+ Do applications written to run on the resulting PLC have to be GPL? If so,
this will be a big problem with most potential users. As I understand the GPL, applications are restricted in the following cases:

- The application actually links to libraries that are part of the PLC.
This would seem to cover all applications written in C/C++ or any other natively compiled language.

- The application includes "significant" parts of GPL code. For example, for many years any parser produced by Bison had to be GPL because of the parser template that Bison included in its output. This led to some less restrictive
Bison alternatives and eventually to a change in the Bison license. This case might well apply to applications generated by our as yet undefined
development tools.

+ FUD: Do potential users (or their lawyers) think that their applications will have to be GPL even if they won't? Arrgh! Seriously, this has been a significant impediment to use of free code in the past at many conservative companies.

What can we do about this without losing the benefits of the GPL?

- Use a Perl-like dual license?
(see http://www.linux-mag.com/1999-10/uncultured_01.html
for Larry Wall's explanation of the reasoning behind this)

- Use the LGPL for everything?

- Use different licenses for different parts of the project.

My final thoughts:

1. There's a reason that Debian supports more than one type of open source license.
2. Ken has good reasons for wanting to get a lawyer involved. We're programmers, not lawyers (at least most of us).

Now I'll shut up about this until we get an expert opinion.

Dan Pierson

_______________________________________________
LinuxPLC mailing list
LinuxPLC@linuxplc.org
http://linuxplc.org/mailman/listinfo/linuxplc

By Lynn Linse on 7 January, 2000 - 2:33 pm

> -----Original Message-----
> From: Curt Wuollet [mailto:cwuollet@ecenet.com]
> Sent: Friday, January 07, 2000 2:28 PM
>
> All:
> We must develop a sense of community and mutual appreciation. That, not the simple fact of Open Source, is the essence of making this thing work. We all need to check our commercialism at the door. ....
>
>License: GPL
>
>Reasons: So it's owned by everyone equally and can't be subverted to exclude anyone. Including everyone excluding no one. <

May I mention I have two teenage daughters? One in college? The idea of working for the common good is fine for young bachelors, but even Linux (I
have to say) is successful because of the Red Hats and Calderas of the world who have made it more than a college plaything. I cannot ask my
daughter to drop out of school just because I want to produce free code for a better world. The university bursar's office doesn't cash "recognition" checks. They prefer hard cash.

I agree with Dan Pierson that in the end there has to be some way to make money with the LinuxPLC. If users can create (& charge for) various custom function blocks, then I think the LinuxPLC has a real chance for success.

Examples:

1) I take Jiri's text ladder, tweak it a bit and try to sell it as my "add-on" to the LinuxPLC. Who will buy? No one. Unless I can add real value,
no one buys.

2) I can create a custom "Add 2 integer" function block - again, no one buys.

3) I create a special US-Customs approved custody transfer block for gas-temp compensation. That I can sell - in fact, offering GPL for this would
prevent it ever being approved. If I can tweak it to save me & my customer from paying petro taxes ...

4) I create a new cascaded, auto-tuning PID-like controller which can put all the other guys out of business. Good, we can move 10,000 copies of
LinuxPLC into industry a month! But if it's GPL, then they all just copy the code & put it into their existing products. Just because LinuxPLC is
"software for free", that doesn't mean in the field it is overall cheaper.

I think we need the basic LinuxPLC and the various drivers & general command interpreters to be GPL, but we must allow people to add proprietary value and support their families or it becomes a good hobby. I see no problem in
people being able to create non-GPL functions and modules for the LinuxPLC. No user is required to buy from them & every "sale" means another real user with the LinuxPLC. This has to help the overall project.

I don't see what the worry here is. Even if someone takes the WHOLE LinuxPLC and passes it off as their own, if they have added no value, no one buys. And any innovation they add - if valuable enough - could be added by others and possibly even under GPL.

I'd better read the GPL and LGPL more carefully. Enlightened self-interest is a great motivator, but one still has to eat.

Best Regards
- Lynn Linse

_______________________________________________
LinuxPLC mailing list
LinuxPLC@linuxplc.org
http://linuxplc.org/mailman/listinfo/linuxplc

By Dan Pierson on 7 January, 2000 - 5:07 pm

Lynn Linse [mailto:lynnl@lantronix.com] wrote:

> I agree with Dan Pierson that in the end there has to be some way to make money with the LinuxPLC. If users can create (& charge for) various custom functions blocks, then I think the LinuxPLC has a real chance for success. <

I'm happy that you agree with me, but that's not what I meant :-)

The model you suggest could be and has been used in the open source world. For example, there are several closed source commercial versions/descendants of XFree86 that offer better performance, drivers and support than the free
product. However, this is the exception rather than the rule.

The users whose license concerns I worry about are:

1. In-house automation. This includes direct use of internal staff and of systems integrators and consultants with expertise in the LinuxPLC. The
automation application very likely implements trade secrets of the client. If they have to
reveal their source code, they won't use the LinuxPLC. (There have been questions about just how the GPL applies to internal distribution of
multiple copies of the same application.)

2. OEM industrial equipment manufacturers. They ship a physical machine including a controller running source code that they don't want to reveal to their competitors. Basically the same as above.

Given the above, I see the main ways of making money from the LinuxPLC as:

Personal:
1. Applications programming whether as staff, systems integrator, consultant or contractor.
2. Support and service, probably as an employee.
3. Maybe in the future: LinuxPLC development work as a paid employee of a supporting company. Several companies in the Linux world have a number of full time paid employees each doing Linux development. This is not a charity -- these companies expect to make money off of the success of Linux.

Corporate:
1. Automation, using the LinuxPLC because of increased control of future developments and increased quality and support responsiveness of the controller. In other words, companies will keep paying for automation just as they do now, but use a LinuxPLC instead of some other controller.
2. Systems integration, support and service. Again, same as now.
3. Hardware and systems sales, possibly with bundled service.

I don't think that anyone on this list needs to feel that this project threatens their livelihood.

Dan Pierson


_______________________________________________
LinuxPLC mailing list
LinuxPLC@linuxplc.org
http://linuxplc.org/mailman/listinfo/linuxplc

By Ken Crater on 7 January, 2000 - 6:46 pm

Curt posted:
> One little thing though, the currency at the open source bazaar is recognition. I won't bitch about WOT not being mentioned if you and all
for that matter, keep clearly in mind that this is a community project. The thing that will get people to code and contribute is to recognize
those actions. Yours is a major contribution at this critical stage, for that we all thank you. <

...with which I agree completely. Sorry you weren't mentioned, but I have to point out that I didn't write the article. I was asked by the editor for some comments about the project and why Control.com was interested in helping out, and I supplied same, and then he wrote the article. I certainly hope that in the future there'll be opportunities for (lots of)
recognition all the way 'round :-). Perhaps we can start by having some suitable attributions on the web entry page of linuxplc.org?

Also, I'd like to take the lead in applauding the balanced approach Curt is taking in the suggested ground rules. There will be many in the user
community who are not familiar with the processes of a project such as this -- so I hope we need not get too defensive when their questions sound like attacks. We should listen to all concerns, and decide which might validly affect our direction and which are simply the result of misunderstanding (which will be legion).

Regards,
Ken Crater
Control.com Inc.
ken@control.com


_______________________________________________
LinuxPLC mailing list
LinuxPLC@linuxplc.org
http://linuxplc.org/mailman/listinfo/linuxplc

By Curt Wuollet on 7 January, 2000 - 8:28 pm

Hi Dan and All:

I don't have much flexibility around this point for the following reasons: First, if it's not GPL, we'll have 40 incompatible "enhanced versions". In any other market I could see some leeway, but this is the market that has developed 40 different ways to talk to remote I/O, all incompatible. The viral clause that you're worrying about is there for exactly that
reason. Second, don't kid yourself, the customers are getting tired of the game too, and Open Source might open doors instead of closing them. I'll bet the customer would pay for the openness if it was pitched right. Third, most of the money to be made is in services anyway; few people
sell code more than once. If you think about that, what's the point in jealously protecting it? Since each project is so unique, how often
would you benefit from someone else's work? Some customers might use it to switch providers, but, if they're that mad, it's probably a good
thing. Do you really make enough on shrinkwrap products to make up for the headaches that model causes? Who gets blamed when it doesn't work?
Fourth, wouldn't the same Open Source virtues apply, just a little, to automation software? Either Open Source is good or not. I'm glad that
people are really thinking about this as it's central to the project. You, more than any group alive, know how closed and proprietary works.
That's why I have a room full of automation equipment that won't work together and the "standards" committees are an expensive joke.
Deming's definition of insanity is when you keep doing things the same way and expect the results to be different. Think Open.

Curt Wuollet,
Wide Open Technologies

Dan Pierson wrote:
>
> > From: Curt Wuollet [mailto:cwuollet@ecenet.com]
> > Subject: LinuxPLC: Linux PLC: ISA Article and a Charter of sorts(long)
>
> > If there's something here you simply can't live with, post a
> > constructive alternative.
> > If you see more wisdom or a better way to serve the whole community,
> > post it. But, please do try to consider what's best for all.
>
> > License: GPL
> >
> > Reasons: So it's owned by everyone equally and can't be subverted to
> > exclude anyone. Including everyone excluding no one.
>
> I really like this, but it has a major problem. It may make the result of
> this
> project unusable by too many people. To have a significant effect on
> opening the
> automation industry we need to have a lot of users. The problems are:
>
> + Do applications written to run on the resulting PLC have to be GPL? If
> so,
> this will be a big problem with most potential users. As I understand the
> GPL,
> applications are restricted in the following cases:
>
> - The application actually links to libraries that are part of the PLC.
> This
> would seem to cover all applications written in C/C++ or any other
> natively
> compiled language.
>
> - The application includes "significant" parts of GPL code. For example,
> for
> many years any parser produced by Bison had to be GPL because of the
> parser
> template that Bison included in its output. This led to some less
> restrictive
> Bison alternatives and eventually to a change in the Bison license.
> This case
> might well apply to applications generated by our as yet undefined
> development
> tools.
>
> + FUD: Do potential users (or their lawyers) think that their applications
> will
> have to be GPL even if they won't? Arrgh! Seriously, this has been a
> significant impediment to use of free code in the past at many
> conservative
> companies.
>
> What can we do about this without loosing the benefits of the GPL?
>
> - Use a Perl-like dual license?
> (see http://www.linux-mag.com/1999-10/uncultured_01.html
> for Larry Wall's explanation of the reasoning behind this)
>
> - Use the LGPL for everything?
>
> - Use different licenses for different parts of the project.
>
> My final thoughts:
>
> 1. There's a reason that Debian supports more than one type of open source
> license.
> 2. Ken has good reasons for wanting to get a lawyer involved. We're
> programmers,
> not lawyers (at least most of us).
>
> Now I'll shut up about this until we get an expert opinion.
>
> Dan Pierson
>
> _______________________________________________
> LinuxPLC mailing list
> LinuxPLC@linuxplc.org
> http://linuxplc.org/mailman/listinfo/linuxplc

_______________________________________________
LinuxPLC mailing list
LinuxPLC@linuxplc.org
http://linuxplc.org/mailman/listinfo/linuxplc

By Curt Wuollet on 7 January, 2000 - 8:39 pm

Hi Lynn

I'm neither young nor a bachelor. Consulting is the biggest profit center; it pays better than hardware or straight software. Let's say we all start with a shrinkwrap product. Where's the value add and why would that change?

Curt W


Lynn Linse wrote:
>
> May I mention I have 2 teen-age daughters? One in college? The idea of working for the common good is fine for young bachelors, but even Linux (I have to say) is successful because of the RedHat's and Cauldra's of the world who have made it more than a college play thing. ...<

_______________________________________________
LinuxPLC mailing list
LinuxPLC@linuxplc.org
http://linuxplc.org/mailman/listinfo/linuxplc

On Fri, Jan 07, 2000 at 04:40:30PM -0800, Lynn Linse wrote:
> > -----Original Message-----
> > From: Mark Hutton [mailto:mark.hutton@vogal.demon.co.uk]
> >
> > First the Linux Lab project. This project has drivers for PC I/O cards, Siemens and Schneider comms. plus lots of other goodies.
http://www.llp.fu-berlin.de < <
>
> This project looks good, but seems to have died - the last email in their Archive is dated Oct 1998, plus the ftp server seems to down (as are ALL 6 of the mirrors).<

I've been lurking on the LLP list, and the last post I see was dated 22 Dec 1999. Maybe the problem is with the archives? I've just sent a query to the list owner to try to find out where they are. I'm certain there is a great deal of
information from the LLP that will be applicable to this project.

--
Ken Irving
Trident Software
jkirving@mosquitonet.com


_______________________________________________
LinuxPLC mailing list
LinuxPLC@linuxplc.org
http://linuxplc.org/mailman/listinfo/linuxplc

By Phil Covington on 8 January, 2000 - 2:24 pm

From: "Curt Wuollet" <cwuollet@ecenet.com>

<snip>

> My personal opinion? Porting to a proprietary OS defeats the entire purpose and will make the the project a part of the problem rather than a solution. Flame away.<

I have to disagree with you on this. Surely, if anything comes of this project, you would have to expect that it will be ported to Win32. I fail
to see why it would be any less open if it were. There is plenty of open source software that has been ported to Windows. To ignore other
"proprietary" OSes, I think, would be a terrible mistake.

While we are all entitled to our opinions, I think that an anti-Microsoft bias here is inappropriate and will turn off some people who otherwise would make valuable contributions to this project. Personally I have been
programming Linux since kernel 1.2.2 (1995), and MS Windows since the beginning; there are some aspects of Linux and Windows that I like and some
aspects of both that I dislike.

Phil Covington
vHMI


_______________________________________________
LinuxPLC mailing list
LinuxPLC@linuxplc.org
http://linuxplc.org/mailman/listinfo/linuxplc

By Curt Wuollet on 10 January, 2000 - 4:56 pm

Hi Phil,

Yes, I would expect that. But please read and understand your EULA and the GPL. That's why the proposed policy differs from my personal opinion. Fragmentation at this stage will almost guarantee failure. That's why I simply ask that we have a system to port first. Who knows? We might be successful enough to get blown out of the water by MicroSoft(tm.) ControlX(tm.) Version 1.0.

_______________________________________________

LinuxPLC mailing list
LinuxPLC@linuxplc.org
http://linuxplc.org/mailman/listinfo/linuxplc

By Phil Covington on 10 January, 2000 - 5:04 pm

Curt,

I totally agree that this system should be developed under Linux and that Linux is the best OS to get a system like this up and running. The system then could be ported to what ever OS you might have need to use. It is just my observation that many people are turned off to the idea of changing to Linux by the overzealous pro-Linux advocates that love to bash MS... Look
at all the soft-PLC packages out there available for Win NT... to get companies to adopt the LinuxPLC it will take much more than casting a few
aspersions directed at Microsoft and Windows.

I have been using Linux with the KURT v1.23 (and now v2.0) patch for a soft-PLC like system. Since I wanted to be able to communicate via ethernet to a Host Engineering EBC module and Automation Direct I/O I didn't use RT-Linux. I now see that there is a PCI ethernet realtime driver available for RT-Linux though... I contacted Host Engineering and requested their
Ethernet SDK source code so that I could port it to Linux. The control program is written in C as a RTMOD and is loaded or unloaded from the kernel as needed. Unfortunately I have not gotten around to writing a Ladder to C translator yet that then could be compiled to a RTMOD. I have also had good luck with using Opto 22's SNAP with the system. Right now I am trying to figure out how to talk to Optimation's Opti-Logic ethernet system since they only have a Windows SDK available. Lots of fun...

Phil Covington
vHMI

_______________________________________________
LinuxPLC mailing list
LinuxPLC@linuxplc.org
http://linuxplc.org/mailman/listinfo/linuxplc

It's not anti-microsloth, it's technical. Writing code to the least common denominator, if you include lose32, limits the available technology way too much.

--
Stan Brown stanb@netcom.com 843-745-3154
Westvaco
Charleston SC.
--
_______________________________________________
LinuxPLC mailing list
LinuxPLC@linuxplc.org
http://linuxplc.org/mailman/listinfo/linuxplc

By Phil Covington on 10 January, 2000 - 5:02 pm

I was not proposing that coding should be done to the least common denominator... If it is coded for Linux then the ideas behind the code design can be used to code for other operating systems.

What I was alluding to is that using terms like "Windoze", "Microsloth", and "lose32" is silly and childish. A better place for that would be in the OS advocacy newsgroup. If this is to be a truly "open" project, then why would anyone care what operating system it was ported to, proprietary or not? All I am saying is that this is not a trivial project and so it would be better
not to alienate potential coders by adopting a snooty "my operating system is better (or more open) than your operating system" attitude in this group.

Phil Covington (L[ose]nix and Windoze Programmer <grin>)
vHMI

_______________________________________________
LinuxPLC mailing list
LinuxPLC@linuxplc.org
http://linuxplc.org/mailman/listinfo/linuxplc

By Curt Wuollet on 10 January, 2000 - 5:17 pm

Peace, please, gentlemen. I have been trying hard to stay out of religion and I apologize for using the term windoze after a long and frustrating
day. For the purposes of this project, I have been grouping it under proprietary operating systems to be non-discriminatory. We seem to all
agree that Linux is the platform for now, the other discussion IS more suitable for Slashdot. Let's beat the swords into plowshares and direct
the zeal into fleshing out the memory map. For my part, I will try not to editorialize. The ControlX thing was (I hope) a joke. Any comments
on Fred's paper? I don't see a need for the ring buffering, but the rest seems like a good way to be as flexible as possible.

The Modbus map I am doing is a struct of arrays grouped by type; digital ins are unsigned short, as are digital outs. The Modbus holding registers and some other feature registers are unsigned
int. Analog ins, highs and lows are float, and there are some other misc types. The idea was to group the arrays that would be scanned
individually and separate from the items that would be set on initialization. The grouping by type will allow a function to be written for each type. Because the arrays have to accommodate a full
map and the most that the physical rack will hold is 64 points, I'm using another array to hold the limits to short-circuit the scan to addresses actually occupied.

The part I am pondering right now is how to tie the data to a Modbus address. The map can be non-contiguous, so I either have to use a 2xn array and store data and address in the pair, or
declare structs like below and use arrays of the structs.

#define BASE_ADDRESS 0xXXXXXX

typedef struct
{
unsigned short data;
int address;
} D_PT;

typedef struct
{
unsigned int data;
int address;
} A_OUT;
.
.
.

typedef struct
{
D_PT d_pts[256];
A_OUT a_outs[128];
.
.
.
int limits[8][2]; /* These would be the first and last array member filled */
} RACK;

struct
{
RACK rack_0;
RACK rack_1;
.
.
.
} modmap;


That would be my driver's map. The next driver's map would start at

BASE_ADDRESS + sizeof(modmap);


This is workable, I think, but will be confusing to dereference.
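
For what it's worth, one way the dereferencing might look against the structs above, just as a sketch: it assumes the modmap instance declared above sits in the shared page and that the first limits pair covers the d_pts array.

/* Sketch only: walk the digital points of rack 0 that are actually
   populated, using the limits pairs to short-circuit past unused entries. */
int i;
int first = modmap.rack_0.limits[0][0];
int last  = modmap.rack_0.limits[0][1];

for (i = first; i <= last; i++) {
    unsigned short val  = modmap.rack_0.d_pts[i].data;     /* current state      */
    int            addr = modmap.rack_0.d_pts[i].address;  /* its Modbus address */
    /* hand val and addr to whatever consumes the scan here */
}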

Comments?



Curt Wuollet,
Wide Open Technologies

_______________________________________________
LinuxPLC mailing list
LinuxPLC@linuxplc.org
http://linuxplc.org/

By Curt Wuollet on 8 January, 2000 - 2:26 pm

Lynn August Linse wrote:
>
> At 07:42 PM 1/7/00 +0000, you wrote:
> >I'm neither young nor a bachelor. Consulting is the biggest profit center , it pays better than hardware or straight software. Let's say we all start with a shrinkwrap product. Where's the value add and why would that change?< <

> I guess because I'm not a consultant. We need to sell things by the tens of thousands with small margin on each.<

> No insult intended, but imagine if all Ethernet cards where made 1 by 1 or 10/month by consultants ;-> would they cost $35 each? Somehow we need to create a synergy, so consultants and "marketing" companies can both gain together from LinuxPLC.<

No insult taken, and I understand where you're coming from. What I am trying to reconcile is that those $35.00 NICs come from a different type of market where you can buy your NIC from anybody. In this market they take a simple
serial card, proprietize it, and they want $450.00 for it instead of $4.50. No value added, simply extortion. The honest value added in automation is the knowledge of how to use hardware and software to solve a problem or improve a process. That works regardless of the cost of the tools.

> Perhaps that is why Linux-Lab and OMAC have not done well considering their age. They are just tools to shift consultants cost from expense (hard $$) to time (soft $$). What if our company wants to see tens of thousands of units sold a month, not 1 or 2 projects a year? <

Your company should be greatly in favor of commoditizing the technology. Linux Lab is IMHO a victim of success; it simply doesn't take this dedicated band of scientists and engineers to do data acquisition and collection anymore.
OMAC has big problems with the definition of "open." They could have their open controllers if they hadn't mandated a closed proprietary system, NT. NIST has a Linux version of their EMC project demonstrated as working. This was ignored; the NT version is two years older and not working yet.
This was as of the last time I looked at it. Indeed, the shared memory technique I want to use came from Fred Proctor of NIST, I believe as part
of that project. This is what happens when people can't put aside their proprietary zeal to accomplish something as a community. Those who don't learn from history are doomed to repeat it. If OMAC were to embrace the Linux model and fund open development like this they would achieve their goal at Internet speed and orders of magnitude less cost. But, they simply
don't get it.

Curt W.

_______________________________________________
LinuxPLC mailing list
LinuxPLC@linuxplc.org
http://linuxplc.org/mailman/listinfo/linuxplc

By Ken Irving on 9 January, 2000 - 7:44 am

On Sun, Jan 09, 2000 at 12:01:33AM -0800, Alan Locke wrote:
> In regard to Curt Wuollet's shared memory and Modbus map posts:
>
> ...
> In this concept does each type of vendor IO have its own separate shared physical IO memory map? For that matter, should each individual IO device (a physical rack generally) have its own separate map, regardless of vendor?. An alternative could be a more generalized struct of arrays to handle any vendor device but I'm not sure how to implement this given the many differences in IO modules. <

Maybe by allocating memory separately, as needed by each device, and linking to this map.

> modules. Is a linked list an appropriate solution here? There is a lot of auxillary data in modern IO modules beyond just the IO data such as configuration data, status and timestamp that the logic engine needs.<

Depending on how specific vs generalized the target model PLC is, IMHO nothing short of an object-oriented approach may be necessary (without
regard to the implementation language). If the PLC that we're working on is fairly simple, with numbered registers only, then fixed allocation
might be fine.

My impression is that Curt has a very specific model in mind, and it could well be that that's what the project is about. However, I wonder
if the Linux PLC can't be a more generalized thing, which can implement a register based PLC, but also can implement other types of controllers.

My impressions of PLCs are derived from (gasp ;) years of exposure to the HVAC industry's products, with the occasional industrial PLC
thrown in. Using PLCs with a Wonderware front-end, it occurred to me that one of the key differences between an HVAC sort of controller and
an industrial PLC is that the industrial version is lean and efficient, to the point that memory on the PLC is not wasted on names or other,
non-essential attributes.

On the HVAC side, IO and memory points are named in controller memory, and frequently other attributes are provided. The controller can be very stand-alone, including a level of self documentation. With a standard PLC, OTOH, a list associating registers with names needs to be maintained, maybe in the development tools or in a front end, but separately from the controller.

I don't think memory size is an issue with this project, and I don't think we'll be hurting for performance if the target PLC is generalized. But
this might be beyond or just different from the project scope. I guess I ought to look at the project homepage to see what the scope is. ;)

> I haven't heard discussion on the subject of how to handle the asynchronous nature of real IO data. I would lean toward the conventional approach of having an intermediate memory map between the device drivers physical IO map
and the ladder logic engine that is updated in full on a periodic basis. I have seen some commercial PLCs that make the machine integrator themselves handle the asynchronous IO data problem (such as AB's ControLogix). Maybe
there are deeper design issues here? <

A description of some of the workings of different PLCs might be useful. Is there a simple statement of what _a_PLC_ is and does, matching all the varieties available?

--
Ken Irving
Trident Software
jkirving@mosquitonet.com


_______________________________________________
LinuxPLC mailing list
LinuxPLC@linuxplc.org
http://linuxplc.org/mailman/listinfo/linuxplc

By Phil Covington on 9 January, 2000 - 9:57 am

Ken Irving wrote:

>On the HVAC side, IO and memory points are named in controller memory, and frequently other attributes are provided. The controller can be very stand-alone, including a level of self documentation. With a standard PLC, OTOH, a list associating registers with names needs to be maintained, maybe in the development tools or in a front end, but separately from the controller.<
>
>I don't think memory size is an issue with this project, and I don't we'll be hurting for performance if the target PLC is generalized. But this might be beyond or just different from the project scope. I guess I ought to look at the project homepage to see what the scope is. ;)<

I think it would be a mistake to emulate the internal workings of a conventional PLC in the LinuxPLC too closely since conventional PLCs are
much more limited in memory space ( as in kilobytes, for most ) and memory management.

Alan Locke wrote:

>I haven't heard discussion on the subject of how to handle the asynchronous nature of real IO data. I would lean toward the conventional approach of having an intermediate memory map between the device drivers physical IO map and the ladder logic engine that is updated in full on a periodic basis. I have seen some commercial PLCs that make the machine integrator themselves handle the asynchronous IO data problem (such as AB's ControLogix). Maybe there are deeper design issues here?<

Maybe there should be controller memory that is not device specific that the Logic Engine has access to for its purposes. The controller memory would be flat and could contain any data type. There could be a mechanism (Controller
Memory Manager?) for mapping controller memory to driver memory and loading and unloading drivers through a defined Driver Interface. If a Driver
Interface is defined that every I/O driver must implement, then the details of how the driver deals with the I/O are separated from the controller and Logic Engine. Curt's memory map for the Modbus devices would then exist in the Modbus driver and would be hidden from the controller and Logic Engine. Whether the driver deals with I/O in a polled or interrupt-driven manner is left up to the device driver. The drivers are black boxes with a defined interface. The device driver would be responsible for updating itself in the controller's memory (input) and responding to updates (output) from the Controller Memory Manager. The Logic Engine just deals with the flat controller memory and has no idea of what specific device it is communicating with. An Object Oriented approach here would be useful as you could encapsulate the controller's memory (data) with the Controller Memory Manager (procedures to operate on that data). This would also have the benefit of allowing you to communicate with many different devices (Modbus,
Opto 22 SNAP via ethernet, serial DF1, etc...) all at once in the same LinuxPLC box.

On start up the details of which drivers to load, driver settings, etc.. could be stored in a simple DB that the Controller Memory Manager reads. This DB could also contain tags and other details that the Logic Engine needs to work with the controller memory through the Controller Memory Manager.

The core of the LinuxPLC then could be broken down to three basic components: Device Driver Interface, Controller Memory Manager, and Logic
Engine. People interested in a certain type of I/O device would implement the device driver (with a defined interface), another group could work on the Controller Memory Manager, and still another group would work on the Logic Engine. Once these components are coded, then a Ladder, SFC, Logic Block, C, or whatever front end could be developed to generate a program that the Logic Engine executes.
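
Just to make the "defined Driver Interface" idea concrete, one way to express it in C is a table of function pointers that each driver fills in and the Controller Memory Manager calls without knowing what is behind them. Every name below is invented for illustration; nothing here has been agreed upon.

/* Hypothetical sketch of a driver interface as a function-pointer table. */
typedef struct io_driver {
    const char *name;                                /* e.g. "modbus_tcp"          */
    int  (*init)(struct io_driver *self, const char *config);
    int  (*scan_inputs)(struct io_driver *self,
                        unsigned short *ctrl_mem, int offset, int count);
    int  (*write_outputs)(struct io_driver *self,
                          const unsigned short *ctrl_mem, int offset, int count);
    void (*shutdown)(struct io_driver *self);
    void *private_data;                              /* driver keeps its own map here */
} io_driver_t;

/* The Controller Memory Manager would keep a list of these and call them
   against the flat controller memory, never touching driver internals:
       drv->scan_inputs(drv, ctrl_mem, drv_offset, drv_count);              */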

Phil Covington



_______________________________________________
LinuxPLC mailing list
LinuxPLC@linuxplc.org
http://linuxplc.org/mailman/listinfo/linuxplc

By Gilles Allard on 11 January, 2000 - 1:05 pm

Some PLCs implement the "image register" concept to ensure the I/O values do not change during a scan. If the I/O process is asynchronous
(it has to be), then there should be a method to get an image (a snapshot) at the beginning of the scan and an update method to send the "image register" to the outputs.
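
A bare-bones sketch of what that cycle looks like, with the input and output image copies bracketing the logic solve. The names are illustrative and the asynchronous I/O side is assumed to maintain live_inputs/live_outputs elsewhere; locking between the two sides is omitted.

/* Sketch of the image-register scan cycle (illustrative names only). */
#include <string.h>

#define N_WORDS 256

unsigned short live_inputs[N_WORDS];    /* updated asynchronously by the I/O side */
unsigned short live_outputs[N_WORDS];   /* pushed asynchronously to the I/O side  */

unsigned short input_image[N_WORDS];    /* snapshot used during the scan */
unsigned short output_image[N_WORDS];   /* built up during the scan      */

void scan_once(void (*solve_logic)(const unsigned short *in, unsigned short *out))
{
    /* 1. snapshot inputs so they cannot change mid-scan */
    memcpy(input_image, live_inputs, sizeof(input_image));

    /* 2. run the logic against the frozen image */
    solve_logic(input_image, output_image);

    /* 3. publish the whole output image in one go */
    memcpy(live_outputs, output_image, sizeof(output_image));
}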

Gilles

_______________________________________________
LinuxPLC mailing list
LinuxPLC@linuxplc.org
http://linuxplc.org/mailman/listinfo/linuxplc

By Curt Wuollet on 9 January, 2000 - 11:28 am

Alan Locke wrote:
>
> In regard to Curt Wuollet's shared memory and Modbus map posts:
>
> Thanks Curt for getting us started into some concrete details with your posts.
> I agree with your proposal of shared memory re the NIST information and also
> don't see a need for use of the ring buffer concept.
>
> In this concept does each type of vendor IO have its own separate shared
> physical IO memory map? For that matter, should each individual IO device (a
> physical rack generally) have its own separate map, regardless of vendor?. An
> alternative could be a more generalized struct of arrays to handle any vendor
> device but I'm not sure how to implement this given the many differences in IO
> modules. Is a linked list an appropriate solution here? There is a lot of
> auxillary data in modern IO modules beyond just the IO data such as
> configuration data, status and timestamp that the logic engine needs.

As I see it, each driver would have its own physical map; some could build
a common map for several racks and some will probably make sense as separate
entities. I am still struggling with the stuff that needs to be accessible
but not every scan. I see this as having its own update routines called as
necessary. Think on it, those of you familiar with different equipment, and
let me know. Right now, I am just trying not to limit those future drivers. I
know Lynn had some input to add on doing things this way and, I think, wants
some type of dynamic allocation. The way this works, that would have to be
simulated, e.g., a "big enough map" with limit parms.
>
> I haven't heard discussion on the subject of how to handle the asynchronous
> nature of real IO data. I would lean toward the conventional approach of
> having an intermediate memory map between the device drivers physical IO map
> and the ladder logic engine that is updated in full on a periodic basis. I
> have seen some commercial PLCs that make the machine integrator themselves
> handle the asynchronous IO data problem (such as AB's ControLogix). Maybe
> there are deeper design issues here?

Absolutely, the design issues are very deep, that's why I am going slowly and
waiting for flashes of brilliance. I was hoping to do much of the work of the
additional layer you mention in the drivers, having this map just magically
updated so the PLC need only write and read. This implies some sort of an I/O
daemon or engine that does all the different stuff needed to perform an input
scan on demand and an output update when the page is written to. I like this
idea a lot because the I/O engine could be updated to include new drivers, etc.
without rewriting the whole system. I would like to support the "normal" cyclic
read, solve, write mode as well as a solve on change mode to save bandwidth.
Once I get through this hairy stuff that is right on the edge of kernel
hacking, I hope things will get easier. At this stage we should account for
all the types of data we are going to need to prove the validity of the model.
I don't think there's any way we can know what will be needed, so this will be
the most hacked upon part of the system. I'm resigned to using the most
flexible method I know and getting my little piece to work, knowing that
change will be the only constant. We need other people to understand it
so they can see if the stuff they know can be done this way.
>
> BTW, I haven't used Modicon equipment. Does anyone have some links for
> Modbus info?
www.modicon.com/openmbus will get you to the Modbus/TCP stuff. For the general
modbus spec, I had to search for their part number. Can someone from Modicon
find out if it's ok to post this stuff? It's big so we need an ftp site.
Opto22 provides fairly good info in their ENET Brain programming and users
guides on their specific mapping. I sure wish they'd fix the bootp problem.
www.opto22.com
>
> Alan Locke
> Controls Engineer, Boeing
>
> "My opinions are my own and not necessarily those of my employer"
>
> _______________________________________________
> LinuxPLC mailing list
> LinuxPLC@linuxplc.org
> http://linuxplc.org/mailman/listinfo/linuxplc


Curt Wuollet,
Wide Open Technologies, No Disclaimers.

_______________________________________________
LinuxPLC mailing list
LinuxPLC@linuxplc.org
http://linuxplc.org/mailman/listinfo/linuxplc

By Curt Wuollet on 9 January, 2000 - 12:19 pm

Ken Irving wrote:
>
> On Sun, Jan 09, 2000 at 12:01:33AM -0800, Alan Locke wrote:
> > In regard to Curt Wuollet's shared memory and Modbus map posts:
> >
> > ...
> > In this concept does each type of vendor IO have its own separate shared
> > physical IO memory map? For that matter, should each individual IO device (a
> > physical rack generally) have its own separate map, regardless of vendor?. An
> > alternative could be a more generalized struct of arrays to handle any vendor
> > device but I'm not sure how to implement this given the many differences in IO
> > modules.
>
> Maybe by allocating memory separately, as needed by each device, and linking
> to this map.

Maybe

> > modules. Is a linked list an appropriate solution here? There is a lot of
> > auxillary data in modern IO modules beyond just the IO data such as
> > configuration data, status and timestamp that the logic engine needs.
>
> Depending on how specific vs generalized the target model PLC is, IMHO
> nothing short of an object-oriented approach may be necessary (without
> regard to the implementation language). If the PLC that we're working
> on is fairly simple, with numbered registers only, then fixed allocation
> might to be fine.

The I/O engine might well benefit from OOP; it would be simpler conceptually if each zany type of I/O was an object with its data and whatever strange method had to be used to tickle it into giving it up. This would hide the swinging of dead chickens, emulating punchcards and magic incantations from a process that simply wants the data. I'm sorry, I wasn't going to editorialize on automation protocols either. Please forgive me.
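
Even in plain C that idea can be approximated: a little struct per I/O point carrying its data plus a function pointer for whatever ritual is needed to refresh it. A toy sketch, with every name made up:

/* Toy sketch: each zany I/O type is a small "object" with its data and
   whatever method it needs to update that data. Illustrative names only. */
typedef struct io_point {
    int            address;                    /* where it lives on the wire */
    unsigned short value;                      /* last known state           */
    int  (*refresh)(struct io_point *self);    /* the dead-chicken swinging  */
} io_point_t;

static int refresh_modbus_coil(struct io_point *self)
{
    /* would issue a Modbus read for self->address and fill self->value */
    return 0;
}

/* The consumer never cares how the data arrives:
       pt->refresh(pt);  then use pt->value;                               */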

> My impression is that Curt has a very specific model in mind,

For the first pass, yes.

> and it could well be that that's what the
> project is about. However, I wonder
> if the Linux PLC can't be a more generalized thing, which can implement
> a register based PLC, but also can implement other types of controllers.

Whatever you can write and convince the community to add.

> My impressions of PLCs are derived from
> (gasp ;) years of exposure to the HVAC
> industry's products, with the occasional
> industrial PLC thrown in. Using PLCs with a
> Wonderware front-end, it occurred to me
> that one of the key differences between an HVAC sort of controller and
> an industrial PLC is that the industrial version is lean and efficient,
> to the point that memory on the PLC is not wasted on names or other,
> non-essential attributes.

I am primarily concerned with doing things so as to take the best advantage of the Linux environment for performance and efficiency
reasons, and so we don't write an operating system in the process. My interest is in a solid foundation so that people have a framework
to add to and test ideas. My expertise (such as it is) is in hardware and system software. I would probably enjoy using C instead of ladder.
You guys are the experts on what you want to DO with the thing. Together we should make a great team. I can probably code a ladder language or a state language if necessary, but I am really hoping someone who knows more about them (like Ken C :^)) will do that. My other concern is that it stay accessible and simple enough that
you don't need to be an MSCS to contribute (I'm not), and that it can be used as an example in education or for people who want to know how
it works. If only a few high priests can understand it, it's not much more useful than the alternatives.

_______________________________________________
LinuxPLC mailing list
LinuxPLC@linuxplc.org
http://linuxplc.org/mailman/listinfo/linuxplc

By Curt Wuollet on 9 January, 2000 - 1:18 pm

Phil Covington wrote:
>
> Ken Irving wrote:
>
> >On the HVAC side, IO and memory points are named in controller memory,
> >and frequently other attributes are provided. The controller can be very
> >stand-alone, including a level of self documentation. With a standard
> >PLC, OTOH, a list associating registers with names needs to be maintained,
> >maybe in the development tools or in a front end, but separately from
> >the controller.
> >
> >I don't think memory size is an issue with this project, and I don't
> >we'll be hurting for performance if the target PLC is generalized. But
> >this might be beyond or just different from the project scope. I guess
> >I ought to look at the project homepage to see what the scope is. ;)
>
> I think it would be a mistake to emulate the internal workings of a
> conventional PLC in the LinuxPLC too closely since conventional PLCs are
> much more limited in memory space ( as in kilobytes, for most ) and memory
> management.
>
> Alan Locke wrote:
>
> >I haven't heard discussion on the subject of how to handle the asynchronous
> >nature of real IO data. I would lean toward the conventional approach of
> >having an intermediate memory map between the device drivers physical IO
> map
> >and the ladder logic engine that is updated in full on a periodic basis. I
> >have seen some commercial PLCs that make the machine integrator themselves
> >handle the asynchronous IO data problem (such as AB's ControLogix). Maybe
> >there are deeper design issues here?
>
> Maybe there should be controller memory that is not device specific that the
> Logic Engine has access to for its purposes. The controller memory would be
> flat and could contain any data type. There could be a mechanism (Controller
> Memory Manager?) for mapping controller memory to driver memory and loading
> and unloading drivers through a defined Driver Interface. If a Driver
> Interface is defined that every I/O driver must implement, then the details
> of how the driver deals with the I/O are seperated from the controller and
> Logic Engine. Curt's memory map for the Modbus devices would then exist in
> the Modbus driver and would be hidden from the controller and Logic Engine.
> Whether the driver deals with I/O in a polled or interrupt driven manner is
> left up to the device driver. The drivers are black boxes with a defined
> interface. The device driver would be responsible for updating itself in

Call this an I/O daemon and we're not very far apart. I envision a clear division at the map, with I/O processes on one side and everything else on the other; this lessens the amount of running code that has to be synchronized for data consistency. I like the flat map with data abstraction. All we are really talking about is moving more functionality to the I/O side.
Suppose we make the shared memory map just an array of, say, 16-bit shorts. Suppose the driver maps and overall map exist on this same page?
The reason I don't want userland maps is that we may want either I/O or the logic engine or both to be realtime and explicitly scheduled.
The code for hard real time is RTLinux unless we want to write our own.

The problem I see in all of this is no matter how we do it, the PLC has to know about data types. In my method, I was thinking it could read from one overall struct for digital inputs, one for analog inputs, etc. These would remain constant. Even if we remap, it has to know what it is looking at. If we allow any type of entity, we have to add a method to the PLC for each entity.

> the controller's memory (input) and responding to updates (output) from the
> Controller Memory Manager. The Logic Engine just deals with the flat
> controller memory and has no idea of what specifc device it is communicating
> with. An Object Oriented approach here would be useful as you could
> encapsulate the controller's memory (data) with the Controller Memory
> Manager (procedures to operate on that data). This would also have the
> benefit of allowing you to communicate with many different devices ( Modbus,
> Opto 22 SNAP via ethernet, serial DF1, etc...) all at once in the same
> LinuxPLC box.

I was hoping the memory map would be the interface and we would use the existing machinery for unloading and loading modules. It's uncommitted
memory; what can't we put there?
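
Just to sketch the mechanics with ordinary SysV shared memory (nothing project-specific, key and size are arbitrary here): any driver or the logic engine could attach the same segment by key and treat it as the map.

/* Sketch: unrelated processes attach the same SysV shared memory segment
   and see the same map. Key and size are made-up values. */
#include <sys/ipc.h>
#include <sys/shm.h>

#define MAP_KEY   0x504C43   /* "PLC", arbitrary */
#define MAP_WORDS 4096

unsigned short *attach_map(void)
{
    int id = shmget(MAP_KEY, MAP_WORDS * sizeof(unsigned short),
                    IPC_CREAT | 0666);
    if (id < 0)
        return 0;
    {
        void *p = shmat(id, 0, 0);
        return (p == (void *)-1) ? 0 : (unsigned short *)p;
    }
}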

>
> On start up the details of which drivers to load, driver settings, etc..
> could be stored in a simple DB that the Controller Memory Manager reads.

for instance, the Berkeley DB that already exists

> This DB could also contain tags and other details that the Logic Engine
> needs to work with the controller memory through the Controller Memory
> Manager.
>
> The core of the LinuxPLC then could be broken down to three basic
> components: Device Driver Interface, Controller Memory Manager, and Logic
> Engine. People interested in a certain type of I/O device would implement
> the device driver (with a defined interface), another group could work on
> the Controller Memory Manager, and still another group would work on the
> Logic Engine. One these components are coded, then a Ladder, SFC, Logic
> Block, C, or whatever front end could be developed to generate a program
> that the Logic Engine executes.

Let's reconcile on requirements.

cww

_______________________________________________
LinuxPLC mailing list
LinuxPLC@linuxplc.org
http://linuxplc.org/mailman/listinfo/linuxplc

On the DL205 from AutomationDirect (Koyo), and I think most PLCs, the routine is
1. Scan inputs
2. Implement logic
3. Set outputs
Additionally, the DL205 (and the SLC50x, if I remember correctly) gives you "Immediate Inputs" and "Immediate Outputs".

Also, Intellution's FIX (also Wonderware, I believe) lets you indicate which "registers" (tags) to update and how often.

I see a way that we might integrate these two things, allowing for very scalable I/O memory. We could allow the application program to indicate
which I/O ranges to control, and when; i.e. define blocks of memory (an I/O database table, for instance) and offer a time-based daemon entry
(cron-like, but on a millisecond scale?), or even a "scan flag" set from user logic that says "get this block of inputs before the next logic
scan" or "set these outputs after this scan".

Also, is there any reason to deal with 16 bit shorts? Most likely we'll be on 32 bit processors, and PLC data types seem most commonly dictated by processor structure. We may be best off with unsigned 32 bit numbers as "native" and fastest. Along the same line, I think many of us are used to keeping track of what's binary, octal, hex, BCD, etc. in our data and instructions ("gee, does that timer count in BCD or binary?" -famous last words on an Omron or Koyo!), so we should be able to standardize that greatly with Linux underneath.

Rob Martin
robm@linuxfreak.com

_______________________________________________
LinuxPLC mailing list
LinuxPLC@linuxplc.org
http://linuxplc.org/mailman/listinfo/linuxplc

On Sun Jan 9 14:11:16 2000 Rob Martin wrote...
>
>Curt Wuollet wrote:
>>
>> Phil Covington wrote:
>> >
>> > Ken Irving wrote:
>>
>> Call this an I/O daemon and we're not very far apart. I envision a clear division at the map with I/O processes on one side and everything else on the other; this lessens the amount of running code that has to be synchronized for data consistency. I like the flat map with data abstraction. All we are really talking about is moving more functionality to the I/O side. Suppose we make the shared memory map just an array of, say, 16-bit shorts. Suppose the driver maps and overall map exist on this same page?
The reason I don't want userland maps is that we may want either I/O or the logic engine or both to be realtime and explicitly scheduled. The code for hard real time is RTLinux unless we want to write our own.<<
>
>On the DL205 from AutomationDirect (Koyo) and I think most PLCs, the routing is
> 1. Scan inputs
> 2. Implement logic
> 3. Set outputs

Most A-B PLCs work this way, although there are models that support asynchronous I/O. This has an I/O scanner process that reads and writes
from/to the I/O data table at its own speed. Makes for some interesting complications in the application program.

>Additionally, the DL205 (and the SLC50x, if I remember correctly) gives you "Immediate Inputs" and "Immediate Outputs". <

A feature which every so often is useful. However, in general the impact on the application program hurts almost as much as the feature helps.

>Also, Intellution's FIX (also Wonderware, I believe) lets you indicate which "registers" (tags) to update and how often.<

Good applications (FactoryLink, for one) group these into "tables" which have triggers. The user's application can then control
this, for example updating fast if the screen is visible, slowly if not.

>Also, is there any reason to deal with 16 bit shorts? Most likely we'll be on 32 bit processors, and PLC data types seem most commonly dictated by processor structure. We may be best of with unsigned 32 bit numbers as "native" and fastest. Along the same line, I think many of us are used to keeping track of what's binary, octal, hex, BCD, etc. in our data and instructions ("gee, does that timer count in BCD or binary?" -famous last words on an Omron or Koyo!), so we should be able to standardize that greatly with Linux underneath.<

16 bits is the native size for a lot of digital I/O.


--
Stan Brown stanb@netcom.com 843-745-3154
Westvaco
Charleston SC.
--

_______________________________________________
LinuxPLC mailing list
LinuxPLC@linuxplc.org
http://linuxplc.org/mailman/listinfo/linuxplc

By Stan Brown on 9 January, 2000 - 5:20 pm

On Sun Jan 9 09:56:59 2000 Phil Covington wrote...
>
>Ken Irving wrote:
>
>>On the HVAC side, IO and memory points are named in controller memory, and frequently other attributes are provided. The controller can be very stand-alone, including a level of self documentation. With a standard PLC, OTOH, a list associating registers with names needs to be maintained, maybe in the development tools or in a front end, but separately from the controller.<<
>>
>>I don't think memory size is an issue with this project, and I don't think we'll be hurting for performance if the target PLC is generalized. But
this might be beyond or just different from the project scope. I guess I ought to look at the project homepage to see what the scope is. ;)<<
>
>I think it would be a mistake to emulate the internal workings of a conventional PLC in the LinuxPLC too closely since conventional PLCs are
much more limited in memory space ( as in kilobytes, for most ) and memory management.<

There is a reason for that. Think of speed, think of program backup, think of understandability.


--
Stan Brown stanb@netcom.com 843-745-3154
Westvaco
Charleston SC.
--

_______________________________________________
LinuxPLC mailing list
LinuxPLC@linuxplc.org
http://linuxplc.org/mailman/listinfo/linuxplc

By Stan Brown on 9 January, 2000 - 5:23 pm

On Sun Jan 9 05:31:49 2000 Curt Wuollet wrote...
>
>www.modicon.com/openmbus will get you to the Modbus/TCP stuff. For the general modbus spec, I had to search for their part number. Can someone from Modicon find out if it's ok to post this stuff? It's big so we need an ftp site. Opto22 provides fairly good info in their ENET Brain programming and users guides on their specific mapping. I sure wish they'd fix the bootp problem. www.opto22.com<

Bear in mind that since ModBus is an open standard, as opposed to, say, A-B RIO, many third-party vendors' products speak some variant of ModBus. Beware of getting too tied up in Modicon's implementation.

--
Stan Brown stanb@netcom.com 843-745-3154
Westvaco
Charleston SC.
--

_______________________________________________
LinuxPLC mailing list
LinuxPLC@linuxplc.org
http://linuxplc.org/mailman/listinfo/linuxplc

By Stan Brown on 9 January, 2000 - 5:24 pm

On Sat Jan 8 17:58:18 2000 Curt Wuollet wrote...
>
>#define BASE_ADDRESS 0xXXXXXX
>
>typedef struct
>{
>unsigned short data;
>int address;
>} D_PT;
>
>typedef struct
>{
>unsigned int data;
>int address;
>} A_OUT;

Might want to rename this one to avoid confusion with assembler output
:-)


--
Stan Brown stanb@netcom.com 843-745-3154
Westvaco
Charleston SC.

_______________________________________________
LinuxPLC mailing list
LinuxPLC@linuxplc.org
http://linuxplc.org/mailman/listinfo/linuxplc

By Curt Wuollet on 9 January, 2000 - 8:16 pm

> Simon Martin wrote:
>
> This is a RFC (Request For Comment). I apologize for writing C code, but I
> think better in code than in any natural language (it's a sorry life). After
> reading quickly through all the mail from this list (it is a busy list isn't
> it), and weaving in a bit of my own experience I would like to say the
> following on the design:
>
> 1) IO
>
> Let's define an IO model. As far as I can see (please correct me if I'm wrong)
> the type of IO that is available is analog (12/16 bit)/digital,
> periodic/immediate update. If these are the only requierements then I would
> suggest that an IO node would be defined adequately by:
>
> enum io_update_type { io_update_periodic, io_update_immediate };
>
> struct tag_io {
> enum io_update_type update;
> long value;
> };
>
> 2) PLC engine.
>
> The PLC engine runs a set of rules on the IO to generate the output. Again,
> let's define a model. A simple model I can think of is the status of one or
> more outputs is defined by the status of one or more inputs, this would translate
> as:
>
> enum comparison_type { comparison_equals, comparison_greater,
> comparison_greater_equal,
> comparison_less, comparison_less_equal }
>
> struct tag_condition {
> struct tag_io *io;
> long value;
> enum comparison_type comparison;
> };
>
> enum conjunction_type { conjunction_and, conjunction_or, conjunction_none };
>
> struct tag_rule {
> struct tag_condition lcondition;
> enum conjunction_type conjunction;
> struct tag_condition rcondition;
> };
>
> struct tag_list_io {
> struct tag_io io;
> struct tag_list_io *next;
> };
>
> struct tag_step {
> struct tag_rule *rule;
> struct tag_list_io *set;
> };
>
> The io may be physical or virtual. The IO is stored in shared memory.
>
> 3) IO to Physical IO mapping
>
> A set of processes map the virtual IO to physical IO and feed the
> corresponding drivers. This process is user configurable via the file
> /etc/iomap.conf, which defines IO range, IO driver responsible and any other
> data required by the driver to identify the physical IO module.
>
> 4) The PLC Engine knows nothing about programming languages.
>
> A set of middleware translators are written to go between the PLC engine
> representation and any programming language (Ladder, etc). This has the
> advantage of being able to see the program as I like it, if I understand IL, I
> see it as IL, if I understand Ladder, I see it as ladder. A standard
> annotation format must be developed to be able to transport non-executable
> information from one format to another.
>
> 5) Keep everything isolated via abstraction models.
>
> OK we all bash MS Windows NT, but I think one of the greatest things to come
> out of it is the HAL (Hardware Abstraction Layer). This presents all the rest
> of the operating system with a "concept of a computer", filling in the blanks
> where required, translating where required. I would hope that this would be
> one of the goals. The PLC just understands conditions and io, other processes
> understand Ladder, ModBus, DH, DH+, etc.
>
> (Flame shields are up and at full strength, Mr Spock)


Hi Simon

(1)
I would like to see at least a provision for update on event. What do we do about
intelligent I/O that presents in IEEE floats, for example? And we need to deal with some weird data types for stuff like I've got that can be boolean or integer. What people tend to do is model with 16-bit regs and you simply read more or less
of them and do the translation. A node struct will need a type member so the PLC "knows" what it's dealing with, or how do we abstract that?

(2) (4) I agree that the languages should resolve to a consistent rule set. This could get really complex for state languages and others where a comparison calls a function rather than simply setting an output. These rules may have to include actions as well as comparisons. A subject for much study. I don't know how clean
this will get.

(3) sounds fine; some of this can be loaded to the map on init.

(5) Gets really interesting with a more or less random assortment of hardware and data representations, but it's a great goal. In this situation, there might be more code in a complete abstraction than there is in the rest of the
system. I vote to keep prudent abstraction in the drivers and keep the system size minimal. More like an embedded system than an operating system. In a trade off between elegance and performance small is beautiful. We can always bloat it
later :^)

cww

_______________________________________________
LinuxPLC mailing list
LinuxPLC@linuxplc.org
http://linuxplc.org/mailman/listinfo/linuxplc

By R A Peterson on 9 January, 2000 - 8:29 pm

I'd like to suggest that all I/O be represented in a structure.

I'd suggest that the structure contain the following:

- The I/O points tag name
- A text descriptor
- Its raw range expressed as the implementor's choice of type (boolean, integer, long float, etc.)
- Its scaled range (also in the type specified by the implementor)
- I/O fault code in case a RIO link fails, or some other fault is detected
- Maybe the last time the I/O point successfully updated

The individual I/O point would be mapped to physical I/O through another structure. This would allow simple changing from one type of I/O to another.
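
Rendered as C, that might look roughly like this; the field names, sizes, fault codes and the use of double for the ranges are only placeholders for discussion:

#include <time.h>

#define TAG_LEN  32
#define DESC_LEN 64

enum io_fault { IO_OK = 0, IO_RIO_LINK_DOWN, IO_STALE, IO_OUT_OF_RANGE };

/* one I/O point, per the list above; 'double' stands in for the
   "implementor's choice of type" just to keep the sketch short */
struct io_point {
    char          tag[TAG_LEN];        /* the point's tag name           */
    char          desc[DESC_LEN];      /* free-text descriptor           */
    double        raw_lo, raw_hi;      /* raw range                      */
    double        eng_lo, eng_hi;      /* scaled (engineering) range     */
    double        value;               /* current scaled value           */
    enum io_fault fault;               /* fault code if a link fails     */
    time_t        last_update;         /* last successful update         */
};

/* separate mapping from logical point to physical I/O, so the brand of
   hardware can change without touching the program */
struct io_binding {
    struct io_point *point;
    int              driver_id;        /* which driver owns the point    */
    int              channel;          /* driver-specific address        */
};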



_______________________________________________
LinuxPLC mailing list
LinuxPLC@linuxplc.org
http://linuxplc.org/mailman/listinfo/linuxplc

Curt Wuollet:
> What do we do about intelligent I/O that presents in IEEE floats, for example? And we need to deal with some weird data types for stuff like I've got that can be boolean or integer. What people tend to do is model with 16-bit regs and you simply read more or less of them and do the translation. A node struct will need a type member so the PLC "knows" what it's dealing with, or how do we abstract that?<

I vote they should be converted to a native format. This ties in with point (5) below.

An interface to the raw data should be available, but not encouraged.
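
As an example of converting to a native format, the common case of an intelligent module presenting an IEEE float across two 16-bit registers comes out to something like this (the word order is device-specific, so treat it as an assumption):

#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* assemble a float from two 16-bit registers, high word first
   (many devices use the opposite order -- check the manual) */
static float regs_to_float(uint16_t hi, uint16_t lo)
{
    uint32_t bits = ((uint32_t)hi << 16) | lo;
    float f;
    memcpy(&f, &bits, sizeof f);   /* reinterpret the bit pattern */
    return f;
}

int main(void)
{
    /* 0x42F6E979 is approximately 123.456 in IEEE 754 single precision */
    printf("%f\n", regs_to_float(0x42F6, 0xE979));
    return 0;
}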

Simon Martin:
> > 2) PLC engine.
> >
> > The PLC engine runs a set of rules on the IO to generate the output. Again, let's define a model. A simple model I can think of is the status of one or more outputs is defined the status of one or more inputs, this would translate
as: ...
The io may be physical or virtual. The IO is stored in shared memory. < <

You'll probably find that to be a very limiting model; despite what ladder pretends, it doesn't actually do that :-)

I would suggest that initially at least the interface be in ordinary C. We'll want the thing to be programmable in C (among other things) and
it's much easier to translate a ruleset into C than the other way around. (Not to mention the performance advantage.)
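
For instance, a single rung translated into ordinary C might come out like this -- every name below is made up:

#include <stdbool.h>
#include <stdio.h>

/* hypothetical image of the I/O map for one scan */
struct io_image {
    bool start_pb, stop_pb;      /* inputs                */
    bool motor_run;              /* output                */
    long tank_level;             /* analog input, scaled  */
};

/* what a ladder rung like
     (start_pb OR motor_run) AND NOT stop_pb AND tank_level < 900 -> motor_run
   could compile down to: one C statement per rung */
static void solve_logic(struct io_image *io)
{
    io->motor_run = (io->start_pb || io->motor_run)
                 && !io->stop_pb
                 && io->tank_level < 900;
}

int main(void)
{
    struct io_image io = { .start_pb = true, .tank_level = 500 };
    solve_logic(&io);
    printf("motor_run = %d\n", io.motor_run);
    return 0;
}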

> > 4) The PLC Engine knows nothing about programming languages.
> >
> > A set of middleware translators are written to go between the PLC engine representation and any programming language (Ladder, etc). < <

Yes. (In fact, the PLC engine should probably be in a separate process.)

> > 3) IO to Physical IO mapping
> >
> > A set of processes map the virtual IO to physical IO and feed the corresponding drivers. This process is user configurable via the file
/etc/iomap.conf, which defines IO range, IO driver responsible and any other data required by the driver to identify the physical IO module.< <

Curt Wuollet:
> (3) sounds fine; some of this can be loaded to the map on init.<

Yes. Don't forget that it should be able to add and remove drivers on-line. (Probably easiest to have them as separate processes again - we
can always make them loadable later.)

> > 5) Keep everything isolated via abstraction models.
> >
> > OK we all bash MS Windows NT, but I think one of the greatest things to come out of it is the HAL (Hardware Abstraction Layer). This presents all the rest of the operating system with a "concept of a computer", < <

Note that it goes the other way, too: it presents the hardware driver with a "concept of a control program", regardless of what language the
control program is in.


Jiri

_______________________________________________
LinuxPLC mailing list
LinuxPLC@linuxplc.org
http://linuxplc.org/mailman/listinfo/linuxplc

By Simon Martin on 10 January, 2000 - 6:44 am

Hi all,

Firstly, I apologize for sending in HTML format. I won't do it again. Secondly, taking on board the suggestions from Curt Wuollet,
PETERSONRA, Jiri Baum, Stan Brown, and Alan Locke, I have modified my original post. If others have replied, I'm sorry, but I just have not
read them yet.

<snip>
>1) IO
>
>Let's define an IO model. As far as I can see (please correct me if I'm wrong) the type of IO that is available is analog (12/16
>bit)/digital, periodic/immediate update. If these are the only requirements then I would suggest that an IO node would be defined
>adequately by:
>
>enum io_update_type { io_update_periodic, io_update_immediate };
>
>struct tag_io {
> enum io_update_type update;
> long value;
>};

This is changed to:

enum io_update_type { io_update_periodic, io_update_immediate };
enum io_value_type { io_value_bool, io_value_long, io_value_float };

struct tag_io_point {
    enum io_value_type value_type;     /* type of value */
    char point_tag[MAX_TAG];
    char point_description[MAX_DESCRIPTION];
    union {
        BOOL b;
        long l;
        float f;
    } value;
};

struct tag_io_block {
    enum io_update_type update_type;   /* type of update */
    long address_points;               /* number of address points in this block */
    char block_tag[MAX_TAG];
    char block_description[MAX_DESCRIPTION];
    struct tag_io_point point[];
};

Immediate input processing would be signaled from the IO mapping processes.
Immediate output processing would be signaled to the IO mapping processes.
Scaling would be a requirement for the PhysicalIO process.
The mapping between VirtualIO and PhysicalIO is a user-dependent thing. We cannot force people to do one thing or another; if they
want to create a discontinuous map, there may be a very good reason for it (future enhancements, etc.) and so we can't say no to
them.
If we create an IO block to represent a different PLC, how would we express it in terms of what we have got above, just as another
1000 points?

There are 2 discrete memory areas. One is the input for current scan area, the second is the output from current scan area and data
read from the physical io. They are rotated (just pointers being shuffled), with no data copying except that caused by the scan
itself. What happens here if the PLC writes to a value being read from the inputs? Registers must be separated into Input/Output I
suppose.
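
A sketch of that pointer rotation, with the sizes and names invented for the example (it covers the input side only):

#include <stdio.h>

#define MAP_WORDS 1024

/* two register areas: the scan reads 'scan_in' while the I/O side
   fills 'io_in'; at the scan boundary only the pointers are swapped */
static unsigned short area_a[MAP_WORDS], area_b[MAP_WORDS];
static unsigned short *scan_in = area_a;   /* snapshot used by the logic  */
static unsigned short *io_in   = area_b;   /* being refreshed by drivers  */

static void rotate_input_areas(void)
{
    unsigned short *tmp = scan_in;
    scan_in = io_in;       /* freshly-read inputs become the scan snapshot */
    io_in   = tmp;         /* old snapshot gets overwritten on the next read */
}

int main(void)
{
    io_in[0] = 42;              /* a driver writes a new input value            */
    rotate_input_areas();       /* scan boundary: no data copied, just pointers */
    printf("%u\n", scan_in[0]); /* the logic now sees 42                        */
    return 0;
}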

>2) PLC engine.
>
>The PLC engine runs a set of rules on the IO to generate the output. Again, let's define a model. A simple model I can think of is
>the status of one or more outputs is defined by the status of one or more inputs, this would translate as:
<snip>
>The io may be physical or virtual. The IO is stored in shared memory.

Please correct me on this, as this is the shakiest bit of my knowledge (I work in R&D in motion control): a combination of input
values will cause an action. The action can be to set outputs or perform a function. This function can be an internal function block
written by us for the LinuxPLC, or an external binary written by the user for the LinuxPLC. Either way it resolves to a bit of code.
Are there any more variants to be taken care of?

As a point of interest at this moment: if the PLC is signaled for the immediate change of a value mid-way through a scan, do we reset the
scan, or just do the change, which may create an inconsistent view of the input conditions?

>3) IO to Physical IO mapping
>
>A set of processes map the virtual IO to physical IO and feed the corresponding drivers. This process is user configurable via the
>file /etc/iomap.conf, which defines IO range, IO driver responsible and any other data required by the driver to identify the
>physical IO module.
>

I think we all agree here.
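
Since no format has been agreed yet, purely as a strawman, /etc/iomap.conf could be line-oriented and read with a few lines of C; the columns and driver names below are invented for this example:

/* strawman iomap.conf parser; the line format
       <first_reg> <count> <driver> <driver arg>
   e.g.
       0    64  modbus_tcp  192.168.1.10
       64   32  opto22      /dev/ttyS0
   is invented here, not an agreed format */
#include <stdio.h>

struct iomap_entry {
    int  first_reg, count;
    char driver[32], arg[64];
};

static int load_iomap(const char *path, struct iomap_entry *tab, int max)
{
    FILE *f = fopen(path, "r");
    int n = 0;
    if (!f) return -1;
    while (n < max &&
           fscanf(f, "%d %d %31s %63s",
                  &tab[n].first_reg, &tab[n].count,
                  tab[n].driver, tab[n].arg) == 4)
        n++;
    fclose(f);
    return n;   /* number of mappings read */
}

int main(void)
{
    struct iomap_entry tab[64];
    int n = load_iomap("/etc/iomap.conf", tab, 64);
    printf("%d mappings loaded\n", n);
    return 0;
}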

>4) The PLC Engine knows nothing about programming languages.
>
>A set of middleware translators are written to go between the PLC engine representation and any programming language (Ladder,
>etc). This has the advantage of being able to see the program as I like it, if I understand IL, I see it as IL, if I understand
>Ladder, I see it as ladder. A standard annotation format must be developed to be able to transport non-executable information from
>one format to another.

This is a set of external programs that precompile the incoming code to a standard set of rules run by the PLC. They are not to be
seen as "interpreters", they are "compilers". This is one of the reasons why we must create a very good rule model, as everything
interfaces with this.

>5) Keep everything isolated via abstraction models.
>
>OK we all bash MS Windows NT, but I think one of the greatest things to come out of it is the HAL (Hardware Abstraction Layer).
>This presents all the rest of the operating system with a "concept of a computer", filling in the blanks where required,
>translating where required. I would hope that this would be one of the goals. The PLC just understands conditions and io, other
>processes understand Ladder, ModBus, DH, DH+, etc.

As I mentioned earlier, I work in motion control. The company I work for as a consultant has various axis interface cards (servo with
incremental encoder, servo with absolute encoder, servo with resolver, stepper encoder, stepper, analog out, etc), any combination
of which can be loaded in a controller. At the moment the low level servo code is full of exceptions caused by the fact that this
card works like this, this one doesn't, etc. It works very well, but maintenance is a nightmare as you are never really sure what a
change will incur. Here we have a real nightmare: how many combinations can we have?

Again, your comments please.


_______________________________________________
LinuxPLC mailing list
LinuxPLC@linuxplc.org
http://linuxplc.org/mailman/listinfo/linuxplc

On Mon Jan 10 07:40:50 2000 Simon Martin wrote...
>
>Hi all,
>
>Firstly I apologize for sending in HTML format. I won't do it again. Secondly taking on board the suggestions from Curt Wuollet,
>PETERSONRA, Jiri Baum, Stan Brown, and Alan Locke, I have modified my original post. If others have replied, I'm sorry, but I just have not
>read them yet.
>
><snip>
>>1) IO
>>
>>Let's define an IO model. As far as I can see (please correct me if I'm wrong) the >type of IO that is available is analog (12/16
>bit)/digital, periodic/immediate >update. If these are the only requierements then I would suggest that an IO node >would be defined
>adequately by:
>>
>>enum io_update_type { io_update_periodic, io_update_immediate };
>>
>>struct tag_io {
>> enum io_update_type update;
>> long value;
>>};
>
>This is changed to:
>
>enum io_update_type { io_update_periodic, io_update_immediate };
>enum io_value_type { io_value_bool, io_value_long, io_value_float };
>
>struct tag_io_point {
> enum io_value_type value_type; /* type of value */
> char point_tag[MAX_TAG];
> char point_description[MAX_DESCRIPTION];
> union {
> BOOL b;
> long l;
> float f;
> } value;
>};
>
>struct tag_io_block {
> enum io_update_type update_type; /* type of update */
> long address_points; /* number of address points in this block */
> char block_tag[MAX_TAG];
> char block_description[MAX_DESCRIPTION];
> struct tag_io_point point[];
>};
>
>Immediate input processing would be signaled from the IO mapping processes.
>Immediate output processing would be signaled to the IO mapping processes.

Both immediate inputs and immediate outputs are triggered from the
application program. In the case of immediate inputs, the application
program waits on a fresh set of data before proceeding. In the case of
immediate outputs, I am not certain if existing PLCs wait for this to
complete before resuming execution of the application program, but I
don't really see any reason a Linux-based PLC (which might be running on
a multiprocessor machine) should do this.

>Scaling would be a requirement for the PhysicalIO process.
>The mapping between VirtualIO and PhysicalIO is a user dependant thing. We cannot force people to do one thing or another, if they
>want to create a discontinuous map, there may be a very good reason for it (futur enhancements, etc.) and so we can't say no to
>them.
>If we create an IO block to represent a different PLC, how would we express it in terms of what we have got above, just as another
>1000 points?

How about PLC_ID:MEMORY_LOCATION ?
>
>There are 2 discrete memory areas. One is the input for current scan area, the second is the output from current scan area and data
>read from the physical io. They are rotated (just pointers being shuffled), with no data copying except that caused by the scan
>itself. What happens here if the PLC writes to a value being read from the inputs? Registers must be separeted into Input/Output I
>suppose.

This used to be common. What happens is that if physical inputs have
been mapped to this area, the user-written value is overwritten on the
next input scan. If, however, there is no mapping of that particular
input address to physical inputs, the value will stay there.

Which brings up the point that the whole data table needs to be non-volatile, i.e.
if the system goes down, all data table values should be init'd to the
last values before the application program is re-enabled.

Also don't forget about non-I/O data table.
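
One simple way to get that retentiveness -- the names and file path below are hypothetical -- is to dump the table to disk on an orderly shutdown and read it back before the program is re-enabled:

#include <stdio.h>

#define DT_WORDS 4096
static unsigned short data_table[DT_WORDS];   /* the retentive data table */

/* dump the data table to disk; called on orderly shutdown */
static int save_data_table(const char *path)
{
    FILE *f = fopen(path, "wb");
    if (!f) return -1;
    size_t n = fwrite(data_table, sizeof data_table[0], DT_WORDS, f);
    fclose(f);
    return n == DT_WORDS ? 0 : -1;
}

/* restore the last values before the application program is re-enabled */
static int restore_data_table(const char *path)
{
    FILE *f = fopen(path, "rb");
    if (!f) return -1;                        /* first boot: nothing saved yet */
    size_t n = fread(data_table, sizeof data_table[0], DT_WORDS, f);
    fclose(f);
    return n == DT_WORDS ? 0 : -1;
}

int main(void)
{
    restore_data_table("/var/lib/linuxplc/datatable.img");   /* hypothetical path */
    data_table[10] = 1234;                    /* ... normal operation ...         */
    save_data_table("/var/lib/linuxplc/datatable.img");
    return 0;
}
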
>
>>2) PLC engine.
>>
>>The PLC engine runs a set of rules on the IO to generate the output. Again, let's >define a model. A simple model I can think of is
>the status of one or more outputs >is defined the status of one or more inputs, this would translate as:
><snip>
>>The io may be physical or virtual. The IO is stored in shared memory.
>
>Please correct me in this, this is the shakiest bit of my knowledge as I work in R&D in motion control, a combination of input
>values will cause an action. The action can be set outputs, perform a function. This function can be an internal function block
>written by us for the LinuxPLC, or an external binary written by the user for the LinuxPLC. Either way it resolves to a bit of code.
>Are there any more variants to be taken care of?

Just please don't forget the run time statuses must be visible in the
language that they were programmed in (ladder, for instance), and that it
must be possible to edit this application code, in the original
language, at run time. This should be done as an "edit set" which can be
switched in and out very easily. This is required for testing run time
changes, and being able to back out of them very fast.

>
>As mere interest at this moment, if the PLC is signaled for the immediate change of a value mid-way through a scan, do we reset the
>scan, just do the change, which may create an inconsistent view of the input conditions.
>

The scan should see the value of the inputs that was true at the
beginning of the scan. If I do an immediate input, the values are updated
and the scan is continued from there. Never should the system restart
the scan from the beginning on its own!

>>3) IO to Physical IO mapping
>>
>>A set of processes map the virtual IO to physical IO and feed the corresponding >drivers. This process is user configurable via the
>file /etc/iomap.conf, which >defines IO range, IO driver responsible and any other data required by the driver to >identify the
>physical IO module.
>>
>
>I think we all agree here.
>
>>4) The PLC Engine knows nothing about programming languages.
>>
>>A set of middleware translators are written to go between the PLC engine >representation and any programming language (Ladder,
>etc). This has the advantage of >being able to see the program as I like it, if I understand IL, I see it as IL, if I >understand
>Ladder, I see it as ladder. A standard annotation format must be >developed to be able to transport non-executable information from
>one format to >another.
>
>This is a set of external programs that precompile the incoming code to a standard set of rules run by the PLC. They are not to be
>seen as "interpreters", they are "compilers". This is one of the reasons why we must create a very good rule model, as everything
>interfaces with this.

I don't see how they can be compilers and still give the run time viewing
and editing that is required. I don't object to compiling the code; I
just want to make certain we don't miss the basic functionality here.

>
>>5) Keep everything isolated via abstraction models.
>>
>>OK we all bash MS Windows NT, but I think one of the greatest things to come out of >it is the HAL (Hardware Abstraction Layer).
>This presents all the rest of the >operating system with a "concept of a computer", filling in the blanks where >required,
>translating where required. I would hope that this would be one of the >goals. The PLC just understands conditions and io, other
>processes understand >Ladder, ModBus, DH, DH+, etc.
>
>As I mentioned earlier, I work in motion control. The company I work for as consultant has various axis interface cards (servo with
>incremental encoder, servo with absolute encoder, servo with resolver, stepper encoder, stepper, analog out, etc), any combination
>of which can be loaded in a controller. At the moment the low level servo code is full of exceptions caused by the fact that this
>card works like this, this one doesn't, etc. It works very well, but maintenance is a nightmare as you are never really sure what a
>change will incurr. Here we have a real nightmare, how many combinations can we have?

I don't understand the question. Please restate.


--
Stan Brown stanb@netcom.com 843-745-3154
Westvaco
Charleston SC.
--
______________________________________________
LinuxPLC mailing list
LinuxPLC@linuxplc.org
http://linuxplc.org/mailman/listinfo/linuxplc

By Dan Pierson on 10 January, 2000 - 12:32 pm

> From: Curt Wuollet [mailto:wideopen@ecenet.com]
> Subject: Re: LinuxPLC: Linux PLC: ISA Article and a Charter of
> sorts(long)

> I am primarily concerned with doing things so as to take the best
> advantage of the Linux environment for performance and efficency
> reasons and so we don't write an operating system in the process.

Absolutely!

> My interest is in a solid foundation so that people have a framework
> to add to and test ideas. My expertise, (such as it is) is in hardware
> and system software. I would probably enjoy using C instead of ladder.
> You guys are the experts on what you want to DO with the thing.
> Together we should make a great team. I can probably code a ladder
> language or a state language if necessary, but I am really hoping
> someone who knows more about them (like Ken C :^)) Will do that.

I'm the main designer and a major implementor of our Quickstep products so
I'll try to help a little :-). My main expertise is in development
languages and tools in general; I'm a relative newbie to automation (been
here about 7 years).

Fundamentally, I seriously doubt that mapping a state language like a
cleaned up Quickstep into an RLL/IEC1131 fixed processing loop is a
realistic goal. For example: Quickstep performs only the required IO at the
time that it's required. Some IO takes significant time to read or write
and we can't afford to waste it.

It is very desirable for this project to support both types of languages.
To do that, I think that we need to separate the RTLinux/kernel support for
industrial IO from the support for the PLC programming cycle. Some types of
programming cycle support will probably also have to be in RTLinux or kernel
for adaquate performance in some applications (is that weasily enough for
you :-)). A basic PLC cycle might be one of these. Since I agree that
writing end user apps in kernel mode is a very bad idea, I suspect that
languages such as state based languages with complex processing cycles will
best be implemented by an RTLinux bytecode interpreter for a specialized
abstract machine. It would be easier on all of us if there were as few of
these abstract machines as possible, but the minimum realistic number is
probably greater than one.
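
Such an abstract machine could start very small; the toy below (with opcodes invented purely for illustration) shows the shape of the interpreter loop:

#include <stdio.h>

/* a toy abstract machine: three opcodes, one accumulator, a register file */
enum op { OP_LOAD, OP_AND, OP_STORE, OP_END };

struct instr { enum op op; int reg; };

static int regs[16];                 /* stands in for the I/O image  */
static int acc;                      /* the "rung state" accumulator */

static void run(const struct instr *prog)
{
    for (; prog->op != OP_END; prog++) {
        switch (prog->op) {
        case OP_LOAD:  acc  = regs[prog->reg]; break;   /* start of rung  */
        case OP_AND:   acc &= regs[prog->reg]; break;   /* series contact */
        case OP_STORE: regs[prog->reg] = acc;  break;   /* output coil    */
        default: break;
        }
    }
}

int main(void)
{
    /* equivalent of:  out(2) = in(0) AND in(1)  */
    const struct instr prog[] = {
        { OP_LOAD, 0 }, { OP_AND, 1 }, { OP_STORE, 2 }, { OP_END, 0 }
    };
    regs[0] = 1; regs[1] = 1;
    run(prog);
    printf("reg2 = %d\n", regs[2]);   /* prints 1 */
    return 0;
}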

As a first step, I'm seriously considering trying to hook one or more of the
free scripting languages available on Linux (starting with Python) to your
initial PLC core. This may seem silly: "such an approach can't possibly
offer real time performance". Not for many applications, maybe for some,
but that's not the important point. The idea would be to provide a test bed
on which we could simply prototype some of the candidates for lower level
abstract machines. Does this sound crazy to everyone else?
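
For what it's worth, the hook itself is short: the standard CPython embedding calls let a C host run a Python "logic program" each scan. The script text below is obviously just a placeholder, and a real prototype would expose the I/O map to Python rather than print:

/* minimal CPython embedding sketch: run a scripted logic pass per scan */
#include <Python.h>

int main(void)
{
    Py_Initialize();

    for (int scan = 0; scan < 3; scan++) {
        PyRun_SimpleString(
            "inputs = [1, 0, 1]\n"
            "output = inputs[0] and inputs[2]\n"
            "print('scan output:', output)\n");
    }

    Py_Finalize();
    return 0;
}

(It needs to be built against the Python development headers and linked with the Python library, of course.)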

Ken Irving says:
> > A description of some of the workings of different PLCs might be
> > useful. Is there a simple statement of what _a_PLC_ is and does,
> > matching all the varieties available?

This is really a key question:
If the definition of "PLC" is limited to something with a fixed:

scan inputs
compute logic (maybe with function block kludges)
set outputs

Then I see this project as limiting itself to technology that was at least
obsolescent when Linux was invented. While it's important to support that
technology, and I realize that some members of this list are interested in
nothing else, it's at least equally important to provide for evolution to
more modern and effective approaches. Yes, we at Control Tech believe that
we have one of these. I *do not* believe that it's the only possible one,
nor am I trying to push it as part of the LinuxPLC, but I think that it
would be a serious long term mistake to make such approaches excessively
difficult to fit into this project.

<Asbestos On>

Dan Pierson
Control Technology Corp.

_______________________________________________
LinuxPLC mailing list
LinuxPLC@linuxplc.org
http://linuxplc.org/mailman/listinfo/linuxplc

On Mon, Jan 10, 2000 at 11:02:45AM -0500, Stan Brown wrote:
> On Mon Jan 10 07:40:50 2000 Simon Martin wrote...
> >
(snip)
>
> >Please correct me in this, this is the shakiest bit of my knowledge as I work in R&D in motion control, a combination of input
> >values will cause an action. The action can be set outputs, perform a function. This function can be an internal function block
> >written by us for the LinuxPLC, or an external binary written by the user for the LinuxPLC. Either way it resolves to a bit of code.
> >Are there any more variants to be taken care of?
>
> Just please don't forget the run time statuses must be visible in the
> language that they were programed in (ladder for instance), and that it
> must be possible to edit this application code, in the original
> language at run time. This should be done as an "edit set" whch can be
> switched in and out very easily. This is required for testing run time
> changes, and being able to back out of them very fast.

(snip)

> >>4) The PLC Engine knows nothing about programming languages.
> >>
> >>A set of middleware translators are written to go between the PLC engine >representation and any programming language (Ladder,
> >etc). This has the advantage of >being able to see the program as I like it, if I understand IL, I see it as IL, if I >understand
> >Ladder, I see it as ladder. A standard annotation format must be >developed to be able to transport non-executable information from
> >one format to >another.
> >
> >This is a set of external programs that precompile the incoming code to a standard set of rules run by the PLC. They are not to be
> >seen as "interpreters", they are "compilers". This is one of the reasons why we must create a very good rule model, as everything
> >interfaces with this.
>
> I don't see how they can be compilers, and give the run time viewing,
> and editing that is required. I don't object to compiling the code, I
> just want to make certain we don't miss the basic functionality here.

There are options in between interpreting and compiling in modern systems,
and this can cloud the issue. Perl interprets code at runtime, but it
first compiles (whether to native or some pcode I'm not sure). Actual
(simple) runtime interpreters involve parsing text continuously, and
surely the PLC engine won't do that.

> >>5) Keep everything isolated via abstraction models.
> >>
> >>OK we all bash MS Windows NT, but I think one of the greatest things to come out of >it is the HAL (Hardware Abstraction Layer).
> >This presents all the rest of the >operating system with a "concept of a computer", filling in the blanks where >required,
> >translating where required. I would hope that this would be one of the >goals. The PLC just understands conditions and io, other
> >processes understand >Ladder, ModBus, DH, DH+, etc.
> >
> >As I mentioned earlier, I work in motion control. The company I work for as consultant has various axis interface cards (servo with
> >incremental encoder, servo with absolute encoder, servo with resolver, stepper encoder, stepper, analog out, etc), any combination
> >of which can be loaded in a controller. At the moment the low level servo code is full of exceptions caused by the fact that this
> >card works like this, this one doesn't, etc. It works very well, but maintenance is a nightmare as you are never really sure what a
> >change will incurr. Here we have a real nightmare, how many combinations can we have?
>
> I don't understand the question. Please resate.

It sounds to me that the low level code in question was developed
under some set of assumptions, or based on one or more target card
models. Then, to accommodate other cards, some sort of conditionals
(or run time exceptions) cause branching/subbing to handle different
special cases. I think he (Simon?) is hoping this does not happen in
the Linux PLC project.

I don't know if a single PLC engine can accommodate all the options, i.e.,
to emulate existing PLCs or other models as they may come along. If
it can, then a specific PLC model might be cleanly represented by
a translator, which would actively link to the PLC engine and vice
versa. An alternative approach might be to completely swap out the
PLC engine with one that specifically targets one (or more) PLC models.
The nightmare approach would be to have a single engine which runs all
the models by conditionals or exceptions. Just my $0.02.

--
Ken Irving
Trident Software
jkirving@mosquitonet.com


_______________________________________________
LinuxPLC mailing list
LinuxPLC@linuxplc.org
http://linuxplc.org/mailman/listinfo/linuxplc

By Mark Bayern on 12 January, 2000 - 9:00 am

Simon Martin wrote:
>
> In the motion control world you've got CNC and servo controllers. I would
> not like to specify exactly what a CNC is, but if it does not have g-code,
> then it ain't.

Oh, I don't know ... <smile> ... names can be difficult things.

Early Hurco controllers did not have G-codes. The original Summit Engineering Bandit controllers (supposedly the first CNC with MDI input)
didn't have G-codes for the first two years.

Mark

_______________________________________________
LinuxPLC mailing list
LinuxPLC@linuxplc.org
http://linuxplc.org/mailman/listinfo/linuxplc

On Sun, Jan 09, 2000 at 10:20:21AM -0500, Stan Brown wrote:
> The idea of having a separate shared memory area for each type is probably OK. I can't see a given system talking to too many different brands of I/O.<

What if one of the I/O devices goes west and you replace it with a different brand?

I vote for the Controller Memory Manager scheme. It should be possible to muck around to any extent with the actual I/O devices without having to touch the program.

(Yes, this means each point individually mapped between the memory map and the device driver.)

> >I would lean toward the conventional approach of having an intermediate
> >memory map between the device drivers physical IO map and the ladder
> >logic engine that is updated in full on a periodic basis.
...
> The conventional approach is correct here in 95% of the cases.

The rest of the time, there can be a special instruction saying "update these 3 I/O points" which will cover the rest of the cases.


Jiri
--
Jiri Baum <jiri@baum.com.au>
On the Internet, nobody knows if you are a @{[@{[open(0),<0>]}-1]}-line
perl script...

_______________________________________________
LinuxPLC mailing list
LinuxPLC@linuxplc.org
http://linuxplc.org/mailman/listinfo/linuxplc

On Wed Jan 12 07:24:48 2000 Simon Martin wrote...
>
>Hi all,
>
>Maybe this is a weird question, but what exactly IS a PLC? That is to say, what differentiates a PLC from any other piece of equipment that handles logic.<

My definition, for what it's worth:

1. Runs ladder logic (may run other languages)
2. Industrially hardened hardware.
3. Talks to real physical I/O (digital, analog, motion control)
4. Program can be viewed with statuses in real time.
5. I/O can be forced.
6. Data table values can be changed in real time.
7. Program can be edited in real time.

Those are the ones that come to mind. Any others?

--
Stan Brown stanb@netcom.com 843-745-3154
Westvaco
Charleston SC.
--

_______________________________________________
LinuxPLC mailing list
LinuxPLC@linuxplc.org
http://linuxplc.org/mailman/listinfo/linuxplc

Simon said:
>Design take 3. Thanks for all the contributions. New bits:
>a) four tier construction (LinuxPLC, iomap, interface, driver).
>b) virtual interfaces (software modules) to generate
>counters/timers/etc, making the LinuxPLC engine
>itself as simple as possible.
>c) define the PLC as a state machine resolver.

I guess I'm uncomfortable with this model (as elaborated) for a number of reasons:

1. Any I/O model that relies upon the convention of scanning all inputs and/or updating all outputs as a monolithic function will not be general enough. For instance, some analog I/O with which I'm familiar take a finite (and significant) time to update, as do complex I/O such as motion controllers. If such things are treated as part of a "scan", the cycle time will become horrendous. If they're treated separately, with only the digital I/O being subject to a scan, we begin to create a divergent model of our I/O for which there is no real basis.

I would propose as an alternative that we make available methods/procedures for both (a.) updating the universe (or a subset thereof -- for instance, "all digital inputs") of I/O and (b.) updating a specific I/O point. This would allow a language engine to select its model of execution.
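
In interface terms, that might be no more than two entry points; all the names below are invented:

#include <stdio.h>

/* hypothetical I/O service interface: a language engine can refresh a
   whole class of points, or exactly one, whenever its execution model
   calls for it */
enum io_class { IO_DIGITAL_IN, IO_DIGITAL_OUT, IO_ANALOG_IN, IO_ANALOG_OUT };

static void io_update_class(enum io_class c)
{
    printf("refreshing every point of class %d\n", c);
}

static void io_update_point(const char *tag)
{
    printf("refreshing point %s only\n", tag);
}

int main(void)
{
    io_update_class(IO_DIGITAL_IN);    /* scan-style engine: whole universe */
    io_update_point("axis3_position"); /* state-style engine: just in time  */
    return 0;
}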

I know that the previously-proposed model could be read as providing these two mechanisms, but I'm a bit uncomfortable with the presumption of an I/O scan as part of the "normal" functionality.

2. As fond as I am of state programming for automation, I don't believe that *all* languages should be force-fit into a state paradigm. This feels a bit strained (e.g., calling an LD program a "single state" program <g>). The more we discuss this, the more I believe that a programming interface is the best "core" for our effort, letting the language execution engine take
whatever form fits best for the language.

If we do choose the direction I'm suggesting (and others have previously suggested), a good starting point might be to examine some of the services that a "typical" language execution engine might require. An attempt to generalize these services to the greatest degree possible now will pay dividends later in flexibility.

In short, I think there's little rationale for coding this the way one would code a conventional PLC, and a lot of reason to do it differently.

Ken Crater
Control.com Inc.
ken@control.com

_______________________________________________
LinuxPLC mailing list
LinuxPLC@linuxplc.org
http://linuxplc.org/mailman/listinfo/linuxplc

Re the problem of scanning both digital and (slow) analog I/O: I can't remember who did it, but some years ago there was a PLC with multiple scans, so more-critical stuff could be watched more closely.

Pete


-----Original Message-----
From: Ken Crater <ken@remote.control.com>

>Simon said:
>>Design take 3. Thanks for all the contributions. New bits:
>>a) four tier construction (LinuxPLC, iomap, interface, driver).
>>b) virtual interfaces (software modules) to generate
>>counters/timers/etc, making the LinuxPLC engine
>>itself as simple as possible.
>>c) define the PLC as a state machine resolver.
>
>I guess I'm uncomfortable with this model (as elaborated) for a number of reasons: ...<clip>

_______________________________________________
LinuxPLC mailing list
LinuxPLC@linuxplc.org
http://linuxplc.org/mailman/listinfo/linuxplc

By Mark Bayern on 17 January, 2000 - 1:13 am

Even more interesting is to go to the annual Chaos conference and spend time with Dick and the rest of the crowd -- it is an interesting bunch.
Lots of ideas about controlling processes. http://www.barn.org and look for conferences.

Mark

_______________________________________________
LinuxPLC mailing list
LinuxPLC@linuxplc.org
http://linuxplc.org/mailman/listinfo/linuxplc

By Jan Krabbenbos on 19 January, 2000 - 4:08 pm

Hi All,

For more information on Real-Time design with Linux, check out the Drops
site:
http://os.inf.tu-dresden.de/drops/

I've seen some papers there and I will download some to read.

--
Greetings,
Jan


Jan Krabbenbos
e-mail: jan.krabbenbos@wxs.nl
www : http://www.krabbenbos.com

_______________________________________________
LinuxPLC mailing list
LinuxPLC@linuxplc.org
http://linuxplc.org/mailman/listinfo/linuxplc

On Wed Jan 19 15:42:33 2000 Phil Covington wrote...
>
>Hello All,
>
>This is an incomplete and simplified overview of options for real time linux
>as it
>would apply to the LinuxPLC. Please see http://www.realtimelinux.org for
>links to
>much more detailed information.

Thanks very much for the summary.
>
>*Problems with Periodic Tasks in the Normal Linux Kernel*
>
>1. Linux has system calls to suspend a process for a given time period,
>but there is no guarantee that the process will be resumed as soon as
>this time interval has passed.
>
>2. Any user process can be pre-empted at an unpredictable moment.
>
>3. Assigning high priority to critical tasks doesn't help much since
>Linux has a "fair" time-sharing scheduling algorithm. See 5 below.
>
>4. Under Linux, virtual memory can be swapped out to disk at any time
>and swapping back into RAM takes an unpredictable amount of time. See 5
>below.

Given all the above, is it impossible, or unlikely, that we can achieve _average_ scan times in the 10's of milliseconds without using one of the real time extensions?

>5. Linux now has POSIX style system calls for soft real-time tasks. Virtual
>memory can be locked in RAM, and the scheduler policy can be changed to a
>priority based policy.

If the answer to the above is yes, can we achieve these scan times by using the POSIX style extensions?

Here are my thoughts on this subject.

1. We must achieve scan times for the logic engines in the low tens of milliseconds for an average sized simple bit banging control program.

2. If this requires one of the real time extensions, then we must bite the bullet and put it in the base design.

3. If this is true, we should use one of the (2 ?) that use POSIX real time semantics, to allow for portability.

4. If however we can achieve the speed above, on average, realizing it's not deterministic, then I would suggest that the real time functionality be an add-in feature that we can compile without. I say this in the name of portability.


Comments?


--
Stan Brown stanb@netcom.com 843-745-3154
Westvaco
Charleston SC.

_______________________________________________
LinuxPLC mailing list
LinuxPLC@linuxplc.org
http://linuxplc.org/mailman/listinfo/linuxplc

By Phil Covington on 19 January, 2000 - 10:07 pm

Hi Stan,

From: "Stan Brown" <stanb@awod.com>

<snip>
> >*Problems with Periodic Tasks in the Normal Linux Kernel*
> >
> >1. Linux has system calls to suspend a process for a given time period,
> >but there is no guarantee that the process will be resumed as soon as
> >this time interval has passed.
> >
> >2. Any user process can be pre-empted at an unpredictable moment.
> >
> >3. Assigning high priority to critical tasks doesn't help much since
> >Linux has a "fair" time-sharing scheduling algorithm. See 5 below.
> >
> >4. Under Linux, virtual memory can be swapped out to disk at any time
> >and swapping back into RAM takes an unpredictable amount of time. See 5
> >below.
>
> Given all the above, is it impossible, or unlikely, that we can achieve
> _average_ scan times in the 10's of milliseconds without using one of
> the real time extensions?


With a process running in user space, it is not possible to achieve less than 20 ms scan times. Unfortunately the 20 ms scan time (periodic task)
would degrade seriously with system load - especially disk activity.


> >5. Linux now has POSIX style system calls for soft real-time tasks. Virtual
> >memory can be locked in RAM, and the scheduler policy can be changed to a
> >priority based policy.
>
> If the answer to the above is yes, can we achieve these scan times by
> using the POSIX style extensions?


The 20 ms scan time (periodic task) that I quoted above is assuming that we are using SCHED_FIFO and locked memory. This is absolutely best case. With increasing system load there will be increasing jitter in the scan time...
meaning the scan time won't be constant even though we are executing the exact same logic path in the Logic Engine (user process).
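
For reference, the POSIX calls in question amount to a few lines at start-up; the priority value is arbitrary and root privileges are assumed:

/* soft real-time setup under a stock kernel: lock the process in RAM and
   switch to the FIFO scheduling class (requires root) */
#include <sched.h>
#include <stdio.h>
#include <sys/mman.h>

int main(void)
{
    struct sched_param sp = { .sched_priority = 50 };

    if (mlockall(MCL_CURRENT | MCL_FUTURE) != 0)      /* no paging delays   */
        perror("mlockall");

    if (sched_setscheduler(0, SCHED_FIFO, &sp) != 0)  /* priority scheduler */
        perror("sched_setscheduler");

    /* ... the scan loop would go here ... */
    return 0;
}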

> Here are my thoughts on this subject.
>
> 1. We must achieve scan times for the logic engines in the low tens of
> milliseconds for an average sized simple bit banging control program.
>
> 2. If this requires one of the real time extensions, then we must bite
> the bullet and put it in the base design.
>
> 3. If this is true, we should use one of the (2 ?) that use POSIX
> real time semantics, to allow for portability.
>
> 4. If however we can achieve the speed above, on average, realizing
> it's not deterministic, then I would suggest that the real time
> functionality be an add in feature, that we can compile without. I say
> this in the name of portability
>
> Comments?

Some people may be able to live with scan times of 20+ ms and in that case the normal linux kernel will be fine.

For sub-20 ms scan times, a real time extension is necessary. That leaves us right now with only two choices - RTLinux and KURT, since they are the most mature and stable linux real time extensions. Either way it will require people to patch and re-compile their kernels or download a pre-patched kernel.

IMHO if we are to support RTLinux then the Logic Engine will have to be a real time module. It makes no sense to have the I/O drivers be real time modules and the Logic Engine a normal linux process. Because RTLinux runs normal Linux as a low priority task the userland Logic Engine will be subjected to even more unpredictable delays as discussed above. Additionally, RTLinux modules do not have access to the normal linux services - How could they? The whole linux kernel is being run as a lowest priority task by the RTLinux kernel! If you need sub millisecond period
tasks, such as a servo loop as in NIST's EMC project, then RTLinux is the only way to go.

The advantage that I see with KURT is that you can run a userland process (the Logic Engine) as a periodic real time process through the Process
RTMod. Whether the real time processes are kernel modules or userland processes they all have access to the linux kernels services. With KURT, scan times in the 1 - 10ms range should be possible with acceptable jitter as the system is loaded.

I wish too that we didn't have to do anything to the kernel to get sub-20 ms scan times. Writing kernel modules is much more dangerous than writing userland processes, too.

At this point I am not advocating one real time extension over the other. This subject, I am sure, will start another round of arguments. I really like RTLinux because it doesn't make any changes to the normal linux kernel which I think is cleaner. I am attracted though to KURT because it allows the use of the linux kernel's services and allows real time userland
processes. This would give us more flexibility in the Logic Engine since it (the userland Logic Engine) could be real timed under KURT or just userland under the normal linux kernel for applications where scan times of 20ms+ are
acceptable. Unfortunately, RTLinux and KURT can't co-exist... ;-(

Phil Covington
vHMI


_______________________________________________
LinuxPLC mailing list
LinuxPLC@linuxplc.org
http://linuxplc.org/mailman/listinfo/linuxplc