Posts posted by smithd

  1. What do you have in mind as an alternative to LV?  What prevents it from being a good fit?

     

Lack of dynamic UI generation or easy ways to compose UIs (xcontrols being excluded from the easy category, in general). To take a really simple example, you could theoretically concatenate two sections of HTML and a browser will render the 'child' and 'parent' data more or less as expected. .NET has something similar with XAML, and Windows Forms also allows pretty simple creation of controls, if I'm not mistaken. So, for example, one of the methods could be "add your controls to this container in this pane" and poof, you get a single interface with both sets of info.

     

To restate my issue: If you have a static dispatch wrapper around the DD method, there is no way to allow the user to select which implementation of the class to use; LabVIEW will select the implementation associated with the object input.  If you then use the Call Parent Method node in the child implementation to access the parent's implementation, I can't see a way to insert the parent UI implementation into the subpanel.

     

    To be fair, I have found one way to do this, by passing the subpanel reference as an input and allowing the method to insert itself.  However, I am using these classes on an RT system, and including the subpanel reference in the class will cause it to fail during deployment.  Therefore, I am looking for a way to do this without having the method insert itself.

No, that makes sense; I think I was mixing up two ways of doing the task. Way 1 is to have a DD method which returns another VI which is the user interface. Another would be to call the user interface directly (static method wrapping dynamic method) and have it insert itself into the subpanel or return a reference to its front panel, but as you said, this wouldn't let you show the parent class UI, so you're stuck with option 1, I think. Since you're selecting which implementation you want yourself, you can't use DD; you have to use static dispatch. That having been said, I don't think it's too unmaintainable to have a different static method for each class, called "MyClass A UI.vi" and "MyClass B UI.vi". It's less than ideal, but it's not that terrible.

     

One other point that caught my eye. You're going to hit that same issue a number of times where you have code you want to run on Windows that can't go on the cRIO. For example, maybe you want to create a report, or use report gen to read an Excel table you're using to import configurations in bulk. I'd strongly recommend splitting each piece of functionality into at least two classes, probably three classes or two classes and a shared library. Class 1 is Windows-only, UI-only. Class 2 is cRIO-only, RT-only. Class 3, or the library, holds any shared typedefs which are generated by class 1 and used to configure class 2. It's a pain to make so many, but it also keeps your code separated by function -- there's no risk you break something on Windows because you included an RT-only method, and there's no risk you break your RT code because you included report gen or some .NET code. If you looked at the 3.0 release of CEF, this is why the "hierarchical" example has a UI class and a Config class that are different. Not shown is the fact that this was built hand in hand with the distributed control and automation framework (aka tag bus framework aka platypus), where every runtime plugin module is paired with 1 configuration class and N user interface classes. I can't stress enough how well this worked for us, both by reducing the headaches and by forcing us to explicitly separate stuff that really only belongs in the UI from stuff that really only belongs in the runtime system.
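As a rough sketch of that split (in Python rather than G, and with made-up names -- this is not the actual CEF structure): the only thing both sides depend on is the shared "typedef," so neither side can drag UI-only or RT-only dependencies onto the other.

```python
from dataclasses import dataclass

# Shared "typedef" -- the only artifact both sides depend on.
@dataclass
class AcqConfig:
    channel: str
    rate_hz: float

class AcqEditorUI:
    """Windows-only: builds a config; free to pull in report gen / .NET."""
    def build_config(self) -> AcqConfig:
        return AcqConfig(channel="Mod1/AI0", rate_hz=1000.0)

class AcqRuntime:
    """cRIO-only: consumes the config; never touches UI libraries."""
    def __init__(self, cfg: AcqConfig):
        self.cfg = cfg

    def start(self) -> str:
        return f"acquiring {self.cfg.channel} at {self.cfg.rate_hz} Hz"
```

The point is the dependency direction: UI produces `AcqConfig`, RT consumes it, and neither ever links against the other.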

  2. (New code attached, run "Launch UI 2.vi")

     

Alright, let's try this again.  First off, I modified the original example to obtain its references via the Call Parent Method node, thanks to the suggestion from shoneill.  Each implementation contains a static reference to its own implementation of the UI and adds it to the array.  This definitely seems way less brittle/fragile, and will scale easily.  Thanks shoneill!

     


     

     

    smithd, I went ahead and set it up with ACBR to show you what I am talking about.  It definitely looks nicer, but since the terminals of the DD UI VI are dynamic, the VI is broken.  Any thoughts on getting around this, aside from setting the control values by name?

     

     

     

Minor style point: I think it's nicer if each class just returns an array, and you build up that array in each successive child. That way, if someone says "oh, I don't want this, so I'm not going to wire it up," it won't hurt as much.

     

Normally, this would be my go-to as well, but this particular design restricts that option; see below:

• Wrap each class's implementation in a static VI.
  • You cannot have VIs in child/parent classes with the same name unless they are DD.  Therefore, I would have to create and name each UI differently, which is not very maintainable.
• Wrap the DD VI in a static VI at the parent level only.
  • This would mean that it would no longer be possible to call the specific implementation that I want.  This option is great for the general case of calling a DD VI with an ACBR node, and I use it in several places already.

This line of thinking is why I was wondering if this breaks OO design rules, although I am not well versed enough to know why.  Essentially, I am looking to call a DD VI statically in special cases.  Any other thoughts?  I have implemented the set-control-value code in my framework, and it seems to be working alright (if a little slow) so far.

     

    Thanks,

I was thinking the same as shoneill: the ACBR should call a static dispatch method, and then the static dispatch method just calls the DD method. This works great for normal ACBR code, but it's more of a challenge here because you also want to insert the VI into the subpanel...so all you'd get is a reference to the static wrapper, not the DD VI it calls. I suppose you *could* do something like what Actor Framework does. When you launch the VI, you pass a queue (size 1, type VI refnum) into the static dispatch VI, which passes it to the DD VI, which immediately gets a reference to itself (a "This VI" ref) and sends it back through the queue to the caller. On the one hand it's kind of horrible, but on the other hand I think it's safer than setting controls by name. Even if the child class screws up (and fails to return its ref through the queue, for example), you can simply abort from the caller and go on with your life.
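The handshake above can be sketched in Python (a thread standing in for the asynchronously launched VI, a size-1 queue for the refnum queue -- purely illustrative, not LabVIEW API):

```python
import queue
import threading

def dd_ui(handback: "queue.Queue[threading.Thread]") -> None:
    # Stand-in for the DD UI VI: the first thing it does is hand a
    # reference to itself ("This VI" ref) back through the queue.
    handback.put(threading.current_thread())
    # ...UI loop would run here...

# Size-1 queue, created by the caller and passed down through the
# static dispatch wrapper to the DD implementation.
handback: "queue.Queue[threading.Thread]" = queue.Queue(maxsize=1)

t = threading.Thread(target=dd_ui, args=(handback,), daemon=True)
t.start()  # like the Start Asynchronous Call node

# Caller blocks with a timeout; if the child misbehaves and never
# replies, the caller can abort instead of hanging forever.
ui_ref = handback.get(timeout=2.0)
assert ui_ref is t
```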

I don't think LabVIEW is really going to provide an easy solution here. Mercer wrote this (http://forums.ni.com/t5/LabVIEW/An-experiment-in-creating-compositable-user-interfaces-for/td-p/1262378) a while back, but...

     

As to your specific code, I don't know that all those methods would work in an exe, and it's pretty fragile. It would be nicer to just give the parent a dynamic method for "get user interface(s)" and another DD VI to be the connector pane for those UIs, and you'd call them with Start Asynchronous Call (rather than setting the FP elements by name).

    You might also take a closer look at the CEF (http://www.ni.com/example/51881/en/). I've used multiple view nodes which share the same data to do something like what you're showing.

  4. I like that you decided to simplify the RTU serial read...I was stubborn and wanted to follow the spec, I think that was a mistake. Serial APIs have come too far since RTU was created.

     

The VISA lock thing is neat; I didn't know that existed until recently. My only comment there is that if you went with a DVR for both, you could put it in the parent class so you can't execute a query unless the parent lets you, which feels nicer to me. Probably not any real reason to do it, though.

     

    I also like that you somewhat simplified the class hierarchy. I can tell you right now I've never seen anyone make a new ADU or implement a new network protocol, so simplifying things on that front is great. The places I've seen extension are adding new function codes or changing slave behavior (which obviously doesn't apply here).

     

    A few other thoughts from that perspective:

-I'd recommend pulling the PDU out of the ADU class and instead making the PDU a parameter of TX/RX and an output of RX. This is just a style thing, of course, but it makes the code easier to comprehend, and for people adding new function codes it's clearer where the data is going and what the library is doing with it.

-I'd just make the PDU a typedef cluster. I went back and forth on this and ended up with the class, but I think that was a mistake. It doesn't need extension and doesn't need encapsulation, so...

-Pulling the individual 'build write...' and 'interpret response' functions out of the PDU class or library would, I think, make it feel like more of a level playing field for people adding new functionality. It makes it clear that the function codes are happily mutable, while in contrast you really shouldn't be touching "TX ADU". More importantly, it makes it less likely that someone adding a custom function code just adds it to your class and saves it, making their version incompatible with the baseline.
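To make the "PDU as plain data, build/interpret as free functions" idea concrete, here's a Python sketch of the standard Modbus Read Holding Registers PDU (function code 0x03; the function names are mine, not from the library under discussion). The ADU layer would wrap this with addressing and CRC and never needs touching when someone adds a function code:

```python
import struct

READ_HOLDING_REGISTERS = 0x03

def build_read_holding_registers(start_addr: int, count: int) -> bytes:
    """Build just the PDU: function code + big-endian start address + count."""
    return struct.pack(">BHH", READ_HOLDING_REGISTERS, start_addr, count)

def interpret_read_response(pdu: bytes) -> list:
    """Parse a response PDU into a list of 16-bit register values."""
    func, byte_count = struct.unpack_from(">BB", pdu, 0)
    if func & 0x80:  # high bit set means a Modbus exception response
        raise IOError(f"Modbus exception code {pdu[2]}")
    return list(struct.unpack_from(f">{byte_count // 2}H", pdu, 2))
```

Someone adding, say, a custom function code writes two more functions like these in their own project, rather than editing and re-saving your class.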

  5. One of my issues with the CVT is that as it stands everything is a double right?

    No, numerics, bools, strings, and arrays of the same.

     

Yeah, I have other issues with the CVT, like performance.  When writing a messaging library that constantly uses a wrapper for variant data, I wanted it to be lightweight, and I generally didn't find a lookup-table wrapper that performed better than Variant Attributes.

Doesn't sound like you're using it for an appropriate use case -- if every access is completely dynamic, then yes, there is no point to it vs a lookup. Where it shines is Neil's use case, where there is a fixed set of tags which you want to share globally but, being fixed, you can cache the lookups for. An equivalent but sometimes slower implementation would be using your variant repo where every contained value is a DVR or queue or notifier. The DVRs can be slower if, as in the OPC UA situation, you want to access a large number of elements in one go.
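The "fixed tags, cached lookups" pattern that makes the CVT worthwhile can be sketched like this (a Python stand-in, not the CVT API itself): pay the name lookup once at startup, then the hot path is a plain indexed access.

```python
class TagTable:
    """Fixed set of named tags; indexes are resolved once at startup."""
    def __init__(self, names):
        self._index = {n: i for i, n in enumerate(names)}
        self._values = [0.0] * len(names)

    def get_index(self, name: str) -> int:
        # The slow, dynamic lookup -- do it once during initialization.
        return self._index[name]

    def read(self, idx: int) -> float:
        # Hot path: plain array access, no name resolution.
        return self._values[idx]

    def write(self, idx: int, value: float) -> None:
        self._values[idx] = value

tags = TagTable(["temp", "pressure"])
i = tags.get_index("pressure")  # cached lookup
tags.write(i, 101.3)
```

If every access instead has to re-resolve the name, you've recreated the fully dynamic case where a plain lookup table does just as well.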

  6. It wouldn't be hard to change the XNode to take a DVR of a variant rather than the variant itself. I did something similar so it would take an object and it was a lot easier than I thought it would be.

     

    For OPC UA in the past I've used the CVT and just made a small process which reads each OPC UA data value and copies it into the CVT. Then you can define groups for each of your tags and read data by group.

     

NI made something very similar recently, although not as nice as this variant repo. The advantage, though, is that it has some metadata (last update timestamp, whether or not the value is valid, and then another variant for metadata):

    https://decibel.ni.com/content/docs/DOC-47108

  7. For the state part of the question:

It also depends on how likely it is for someone to change settings from a different console. I've worked primarily on applications where this is a risk (HMI1 and HMI2 can both read from and change settings on DeviceA). As such, I tend to favor always always always storing the model on the device. HMI1 says "here are the settings I think you should have," and then DeviceA takes those settings and configures any attached devices appropriately. Ideally, you'd have some sort of subscription mechanism set up so that HMI2 knows HMI1 made a change, but...

     

This changes entirely in situations where a device talks to one and only one master (i.e. a cRIO configuring an embedded sensor). In this case I think it's perfectly justified to maintain a local copy of the state on the cRIO if there are performance concerns. I'd still tend towards letting the sensor maintain its own state, but for values which are accessed frequently, caching makes sense.

     

    For the energy meter situation, I'm guessing you configure it once on boot before you start running the system and then rarely touch it again. If that guess is right, I wouldn't bother with caching -- just always send the new cfg to the device.

     

    However, I do think there is a design question in there (item 1):

    How does your application distinguish between sensor types? If you have one big cluster with settings A,B,C,D,E, and F, and some sensors ignore F while others ignore C, you're left with a few questions:

    -How advanced is your user? If you just ignore errors for settings that don't exist, will the user understand why the setting wasn't implemented as they expect? If so, I'd just send everything and let the sensor sort it out.

    -If the user isn't advanced, how do you display different configuration screens to them...because if you have different configuration screens, then your application could simply not send settings that don't make sense for that sensor.

Probably not; it's designed to handle the situation that sometimes came up prior to the RT cDAQ, where you needed an RT processor but didn't care about the FPGA. The goal was to make a simple DAQmx-like API for that specific use case.

     

That having been said, it's so simple that I've used it as a starting point for any streaming acquisition from the FPGA. For example, you could be processing data and then want to stream up the results, so in the acquisition loop, instead of reading directly from I/O, you could read from a target-scoped FIFO or a local register.

     

Its true value to me, now that the RT cDAQ is out, is (a) as a teaching tool, and (b) as a template which reminds me of all the stuff I might need, like having an idle state on the FPGA, acquisition stopping and restarting, checking for timeouts, etc. If you have your own template or pattern you start from, or your projects are different enough that such a tool doesn't make sense, then the reference library doesn't make sense. The page itself also has some great benchmarks for network and disk streaming, which I reference regularly.

Okay, I think I found the issue, and I'd be curious to hear if NI has anything to say about it.  The issue is with getting the references to the controls and indicators.  On RT there is no front panel, so getting references to those controls isn't possible.  But the normal way you would do this is to use the Front Panel property on the VI.  Reading the help, it says it is available in the Real-Time OS...but there is no FP in Real-Time...so it returns an error...so is this really available if it will always return an error?  Seems like these two things don't agree.

There's a bunch of those, to the point that I don't find those properties in the help very trustworthy. I believe the explanation is that it can work on RT in interactive mode with the front panel open, but not in an exe?

     

It will definitely work on the Linux cRIOs with display ports, as you have to check a little box which tells LabVIEW to compile it with the UI components.

     

You can use an invoke node on the VI, I think (Control Value.Get All), but then you lose the signaling property nodes.

  10. Something else I would throw in:

If you have an object with a reference in it of any kind, I think it really pays off to have an explicit "allocate" or "connect" or whatever function which converts your object from a "dead" object (i.e. it's just a cluster) to a "living" one, whose state has to be carefully managed. I would also recommend it be different from your "construct" function.

     

A specific pattern I've found to be successful is similar to what manudelavega suggested. You have a "configuration" class and an execution class. You can drop down your config class, change the settings that make sense, and then initialize your runtime class using that configuration class. If you want to get fancy, you can load into that configuration class handy things like saving to disk, or specifying what specific runtime class it needs to be loaded into, so from your user's perspective there's really just the one class that happens to metamorphose through its life cycle.
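A minimal sketch of that dead/living split, in Python with invented names: the config object is pure data (safe to copy, save, load), and the session object owns the reference, with an explicit `connect` step separate from construction.

```python
from dataclasses import dataclass
import json

@dataclass
class DaqConfig:
    """'Dead' object: just data. Copying, saving, loading are all safe."""
    device: str
    rate_hz: float

    def to_json(self) -> str:
        return json.dumps(self.__dict__)

    @classmethod
    def from_json(cls, s: str) -> "DaqConfig":
        return cls(**json.loads(s))

class DaqSession:
    """'Living' object: owns a reference whose state must be managed."""
    def __init__(self, cfg: DaqConfig):
        self._cfg = cfg
        self._handle = None  # constructed, but not yet "alive"

    def connect(self) -> None:
        # Explicit allocate step, distinct from construction.
        self._handle = f"session:{self._cfg.device}"

    def close(self) -> None:
        self._handle = None
```

The payoff is that everything up to `connect()` behaves by-value, and only the window between `connect()` and `close()` needs careful lifetime management.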

I have my own error handler loop in every app. This loop polls a queue for new error messages, displays them in an automatically scrolling string control (which in some applications is optional), logs the error to a log file (and at the same time looks for really old log files that can be deleted), and also optionally shows a floating but non-modal dialog.

     

Ah yeah, that's what I meant, a scrolling string rather than a status bar.

     

I'm not a fan of the status bar for errors, just as I'm not a fan of the status bar for mouse-over help. A one-liner is not enough for users. They really need a plain [insert your language here] error message and an explanation of how to proceed. From a personal perspective, it just confuses me between something that's nice to know (which I associate with status bars, so I don't look often) and something I must know.

     

For settings, I am a fan of the browser style of flagging errors, i.e. change the background colour of the item and show a message saying "sort that out!" :P

That's interesting. I think because I do more RT stuff, I tend towards thinking of RT as 'the important thing' -- it does all this detailed error logging and recovery, while the UI is more...I guess friendly? So if the user does something wrong, it's not really an error, it's just a mistake, and the scrolling log trains people how to not do that. Your background color thing is, I think, along the same lines, and I like that idea. I've done something like that once or twice when I felt fancy (pop in a hidden red X next to an error control in my case, but same concept). It does feel like that would get tedious after a while. Do you have a good system for setting that up, or do you just bite the bullet and do it manually, one by one?
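The polling error-handler loop described above (queue in, scrolling log out) can be roughly sketched like this -- names, timing, and the list standing in for the string control are all made up:

```python
import queue
import threading
import time

errors: "queue.Queue[str]" = queue.Queue()
log_lines: list = []  # stands in for the scrolling string control

def error_loop(stop: threading.Event) -> None:
    """Poll the error queue; timestamp and append each message."""
    while not stop.is_set():
        try:
            msg = errors.get(timeout=0.1)
        except queue.Empty:
            continue
        log_lines.append(f"{time.strftime('%H:%M:%S')}  {msg}")
        # a real version would also append to a log file and prune old logs

stop = threading.Event()
t = threading.Thread(target=error_loop, args=(stop,), daemon=True)
t.start()

errors.put("new test needs a name")  # some loop reports a user mistake
time.sleep(0.3)                      # give the loop time to drain the queue
stop.set()
t.join()
```

The key property is that reporting an error is a cheap, non-blocking enqueue, so no producer loop ever stalls on a dialog.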

     

     

    Have any of you ever actually gone all the way and said goodbye to the standard labview error cluster? Or is it just too ubiquitous to get around?

lol :D No, really, I didn't know that. That's kind of funny.

     

In general I try to use the simple error handler only on exit, and even then it's just a bad habit -- nobody ever ever ever ever reads the dialog. You're better off saving it to a log in a folder called "delete me" on the user's desktop ;). I try to handle errors gracefully and then report the failure in a status log on the bottom of the window (or in the temp dir). That might not be perfect, but at least it keeps a log, so people can go back and see "oh man, I got that error, that's why that operation failed" before completely ignoring the rest of the error message and code and moving on with their lives.

     

The other thing I find for myself is that I loathe when people say "simple error handler is bad, let's use the 1-button dialog". It's great the first time, as it gives you immediate blocking feedback on what you did wrong. And then every subsequent time you see it, you want to kill the developer. That's another reason I find the status bar to be the least worst option. The user tries to do something silly, like making a new test with no name, you put a message in the status bar, they say "whoops," and then they don't blame the code (ok, they don't blame it as much).

     

    Has anyone else found a better way of gracefully showing errors in a non-obnoxious way?

Another neat option, depending on the platform (ok, everything except VxWorks and Pharlap), is to use a traditional web server (Apache, etc.) to host public content and proxy requests to a given URL pattern (like /myLVApp/*) over to the LabVIEW web service. That way, static HTTP pages are served up in the normal way by a standard, generally pretty fast web server, and anything you need to run code for can still be handled by your LabVIEW service.

    I kind of vaguely describe this over here: https://lavag.org/topic/19260-dynamic-content-in-other-web-servers/

    I'm sure Neil's way works well but if this option sounds interesting to you I can probably help out a bit or at least get you going in the right direction.
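For a flavor of what that proxy split looks like, here's a Caddyfile-style sketch (Caddy v1-era directives; the paths and port are made up, and you'd point the proxy at whatever port your LabVIEW web service actually listens on):

```
localhost:8000

# Static pages served directly by the web server
root ./static

# Anything under /myLVApp is handed to the LabVIEW web service
proxy /myLVApp localhost:8080
```

Apache or nginx can do the same thing with their own reverse-proxy directives; the idea is identical either way.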

Michael, did you ever figure out a solution? I was just trying to figure this out today and found a bunch of threads from a decade ago about this, but haven't found anyone with a solution to the problem. The silly solution I've come up with is to use pt->row on coordinates +/- 5 pixels, and if the tag changes, then it must be on that border...but I don't like this idea. And this only really works for me because (right now) only one type of element in the tree is allowed to accept children, so I just need to check for that and can either drop or insert based on that property.

By the way, you can use a lookup table / Karnaugh diagram if you start with bytes instead of booleans. Still slower than the logic (0.027 versus 0.022).

    This makes sense if you break them both down:

     

    Top option:

-Load int, load int (likely pre-loaded, since I'm assuming that's the point of letting LabVIEW do autoindexing, so it should be fast)

    -Add int, int

    -Convert someIntType->int (likely free)

    -Add int, int

    -Convert result to I32 (maybe free)

    -Bounds check on index

    -Random access from memory at index

    ...still waiting

    ...still waiting...

    ...still waiting...

    ...result

    -Store result in output array

     

    Bottom Option:

    -Load int, int (fast)

    -Or int, int

    -And int, int

    -Move result to output array
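The two pipelines can be sketched in Python (for a hypothetical two-input function -- here OR, on 0/1 bytes -- rather than the exact expression in the VI) to show why the lookup carries extra work per element:

```python
# Truth table for out = a OR b, indexed by a*2 + b
LUT = [0, 1, 1, 1]

def via_lookup(a: int, b: int) -> int:
    # Two adds, a bounds check, and a memory fetch per element
    return LUT[a * 2 + b]

def via_logic(a: int, b: int) -> int:
    # A single register-to-register operation
    return a | b

# The two implementations agree on every input combination
for a in (0, 1):
    for b in (0, 1):
        assert via_lookup(a, b) == via_logic(a, b)
```

Same answer either way; the lookup just routes every element through an index computation and a memory access, which is exactly the "still waiting" part above.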

     

The way this is coded, these two processes happen in parallel.  If you are on a single core, you may get inconsistent results.  Also, with UIs being updated asynchronously, if one finished before the other (which we expect), then your timing won't be consistent, because your computer will be off updating the UI and may take cycles away from the other process.  Basically, the timing information from that VI can't be trusted, but the fact that the numbers are so small means it probably doesn't matter anyway.

     

Also, can't that bottom for loop be removed altogether?  AND and OR work on arrays as well as scalars.

Some of it happens in parallel, but there is a wire wrapping around from the end of the top loop to the start of the bottom.

  16.  

Note: I'm thinking of putting in an abstract parent class with "To JSON" and "From JSON" methods, but I'm not going to specify how exactly objects will be serialized.  So Charles can override with his implementation (without modifying JSON.lvlib) without restricting others from creating classes that can be generated from different JSON schema.   I could imagine use cases where one is generating objects from JSON that is created by a non-LabVIEW program that one cannot modify, for example.   Or one just wants a different format; here's a list of objects from one of my projects:

I don't have the code in front of me, so forgive me if I'm off a bit, but I think there are two options here, and I'd like to be sure which you're proposing:

-To JSON which produces a raw string

 

-To JSON which produces a JSON.lvlib object, which the library then flattens in the usual way into a string.

    I'd prefer the second option myself...is that what you're going for?
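In Python terms (a toy class with invented names, just to make the distinction concrete), the two options look like this:

```python
import json

class Waveform:
    def __init__(self, t0: float, dt: float):
        self.t0, self.dt = t0, dt

    # Option 1: the class emits a raw string itself, so every class
    # owns its own formatting.
    def to_json_string(self) -> str:
        return json.dumps({"t0": self.t0, "dt": self.dt})

    # Option 2: the class emits a structured value and the JSON
    # library does all the flattening, keeping formatting in one place.
    def to_json_value(self) -> dict:
        return {"t0": self.t0, "dt": self.dt}
```

With option 2, whitespace, number formatting, and escaping decisions live in the library rather than being re-implemented (possibly inconsistently) in every override.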

     

     

    I haven't looked at the Character Lineator, but my impression from forum chatter is that it's a monster that runs slowly. Is it worth use in production software these days?

Yeah. It's probably OK on desktop, but it's really slow on RT and probably not suitable for any code which runs regularly.

As hoovah and Mark said, you probably want something more like the standalone cDAQ-9132, which has an RT controller but still uses DAQmx (it's $1k more, though, because the cDAQ doesn't come in a "value" line -- it does come with the monitor out, x64 Linux, and a much faster processor, so it may be worth the cash anyway).

     

If you actually do need the FPGA because of timing, processing, or specific I/O needs, then it is absolutely possible to simulate the FPGA and RT code, something which has gotten steadily better over the years. Is it seamless? No, but it works well enough.

    http://www.ni.com/white-paper/51859/en/

From what I can tell, people like to use that to make hard-to-read code, such that a particular function (with side effects) will or won't get called depending on earlier boolean expressions:

    https://en.wikipedia.org/wiki/Short-circuit_evaluation

     

Either way, it's more realistic to say that the behavior is defined like this, rather than that the compiler does some special optimization. LabVIEW's compiler simply doesn't do the same:

    "The short-circuit expression x Sand y (using Sand to denote the short-circuit variety) is equivalent to the conditional expression if x then y else false; the expression x Sor y is equivalent to if x then true else y."

  19. Thanks :)

     

    In the other thread I mentioned I used this to make a quick demo for freeboard using the CVT. I've attached the demo here.

    To use it:

    1. Unzip the attached freeboard demo.zip

    2. Install the CVT (current value table) from VI Package manager

    3. Download https://caddyserver.com/download and extract to the same folder as the zip. Caddy.exe and "caddyfile" should be in the same directory.

    4. Optional: switch up the ports in "caddyfile".

    5. Open zip/lv/sampleweb.lvproj, then open server.vi

    6. Optional: switch up the ports

    7. Press run

    8. Run caddy.exe

    9. Go to localhost in your browser.

    10. At the top, press "Load Freeboard", then select zip/lv/http and ws.json

    11. Change values on the front panel of server.vi.

     

    There are 2 sets of data sources, one set uses websockets and the other uses http requests. Both provide the same data which looks like this:

    {
    "humidity": 2.377959,
    "temp": 0.000000,
    "label": "jkjk",
    "set": [1],
    "onoff": true
    }
     
Note: the code is not great, just something I threw together really fast with pieces of the CVT web library on ni.com/referencedesigns. However, one nice thing is you can see that creating a web service like this requires two functions: "get" and an initialization helper. The other nice thing is that the web service didn't start until after my application did, so I know exactly what the state of my system is when the WS starts (vs deploying from the project and then initializing later, which is what CVT web has to do).

    freeboard demo.zip

  20. Hi guys!

     

I'm using the i3 JSON palette to transform a big cluster into JSON. The cluster has a lot of information, and one part of it is an array with 600 elements. The cluster's size is approx. 6 kB, which isn't too much.

 

When I'm transferring, it takes approx. 1 sec to transfer. What should I do?

     

Depends on what exactly is taking 1 second. If it's the conversion from cluster to JSON string -- well, that library, if I remember correctly, is on the slow side. I'd recommend this one:

    https://lavag.org/files/file/216-json-labview/

    or of course the built-in flatten to json which is even faster.
