Everything posted by smithd

  1. Lack of dynamic UI generation or easy ways to compose UIs (XControls being excluded from the easy category in general). As a really simple example, you can theoretically concatenate two sections of HTML and the website will render the 'child' and 'parent' data more or less as expected. .NET has something similar with XAML, and Windows Forms also allows pretty simple creation of controls, if I'm not mistaken. So, for example, one of the methods could be "add your controls to this container in this pane" and poof, you get a single interface with both sets of info. No, that makes sense -- I think I was mixing up two ways of doing the task. Way 1 is to have a DD method which returns another VI which is the user interface. Way 2 would be to call the user interface directly (a static method wrapping the dynamic method), and it can insert itself into the subpanel or return a reference to its front panel; but as you said, this wouldn't let you show the parent class UI, so you're stuck with option 1, I think. Since you're selecting which implementation you want yourself, you can't use DD; you have to use static dispatch. That having been said, I don't think it's all that unmaintainable to have a different static method for each class, called "MyClass A UI.vi" and "MyClass B UI.vi". It's less than ideal, but it's not that terrible. One other point that caught my eye: you're going to hit that same issue a number of times where you have code you want to run on Windows that can't go on the cRIO. For example, maybe you want to create a report, or use Report Gen to read an Excel table you're using to import configurations in bulk. I'd strongly recommend splitting each piece of functionality into at least two classes -- probably three classes, or two classes and a shared library. Class 1 is Windows-only, UI-only. Class 2 is cRIO-only, RT-only. Class 3 (or the library) is any shared typedefs which are generated by class 1 and used to configure class 2.
It's a pain to make so many, but it also keeps your code separated by function -- there's no risk you break something on Windows because you included an RT-only method, and there's no risk you break your RT code because you included Report Gen or some .NET code. If you looked at the 3.0 release of the CEF, this is why the "hierarchical" example has a UI class and a Config class that are different. Not shown is the fact that this was built hand in hand with the distributed control and automation framework (aka tag bus framework, aka platypus), where every runtime plugin module is paired with 1 configuration class and N user interface classes. I can't stress enough how well this worked for us, both by reducing the headaches and by forcing us to explicitly separate stuff that really only belongs in the UI from stuff that really only belongs in the runtime system.
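Since LabVIEW block diagrams don't paste well into a forum post, here's the three-layer split above sketched in Python (all class and field names are hypothetical, just to show the dependency direction): the shared "typedef" layer is plain data with no dependencies, the editor layer would be Windows-only, and the runtime layer would be RT-only and consumes only the shared data.

```python
from dataclasses import dataclass, replace

# Shared "typedef" layer (class 3 / the library): plain data, no UI or RT
# dependencies, safe to load on both Windows and the cRIO.
@dataclass(frozen=True)
class ChannelConfig:
    name: str
    sample_rate_hz: float
    enabled: bool = True

# Windows-only layer (class 1): editors, report generation, anything that
# would drag in UI or .NET dependencies. Generates the shared config.
class ChannelConfigEditor:
    def edit(self, cfg: ChannelConfig, **changes) -> ChannelConfig:
        # stand-in for an edit dialog: returns an updated copy
        return replace(cfg, **changes)

# RT-only layer (class 2): consumes the shared config, never touches UI code.
class ChannelRuntime:
    def __init__(self, cfg: ChannelConfig):
        self.cfg = cfg

    def start(self) -> str:
        return f"acquiring {self.cfg.name} at {self.cfg.sample_rate_hz} Hz"
```

The point of the frozen dataclass is the same as a typedef cluster: the config is inert data until the runtime class is initialized with it.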
  2. Minor style point: I think it's nicer if each class just returns an array, and you build up that array in each successive child. That way, if someone says "oh, I don't want this, so I'm not going to wire it up", it won't hurt as much. I was thinking the same as shoneill: the ACBR should call a static dispatch method, and then the static dispatch method just calls the DD method. This works great for normal ACBR code, but it's more of a challenge here because you also want to insert the VI into the subpanel... so all you'd get is the static VI, not the called DD one. I suppose you *could* do something like what Actor Framework does. When you launch the VI, you pass a queue (size 1, type VI refnum) into the static dispatch VI, which passes it to the DD VI, which immediately gets a reference to itself (a "This VI" ref) and sends it back through the queue to the caller. On the one hand it's kind of horrible, but on the other hand I think it's safer than setting controls by name. Even if the child class screws up (and fails to return its ref through the queue, for example), you can simply abort from the caller and go on with your life.
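The queue handshake above translates pretty directly to any async-launch mechanism. A rough Python sketch (threads standing in for async-called VIs; names hypothetical): the caller passes a size-1 queue to the launched worker, the worker's first act is to send back a reference to itself, and the caller uses a timeout so a misbehaving child can be abandoned.

```python
import queue
import threading
import time

def launch_ui(handshake: "queue.Queue[threading.Thread]") -> threading.Thread:
    # stand-in for the static dispatch wrapper that async-calls the DD VI
    def ui_main():
        # first thing the callee does: send its own reference to the caller
        handshake.put(threading.current_thread())
        time.sleep(0.05)  # stand-in for the UI's message loop
    t = threading.Thread(target=ui_main, daemon=True)
    t.start()
    return t

handshake: "queue.Queue[threading.Thread]" = queue.Queue(maxsize=1)
t = launch_ui(handshake)
try:
    ui_ref = handshake.get(timeout=1.0)  # caller now holds the callee's ref
except queue.Empty:
    ui_ref = None  # child misbehaved: abort from the caller and move on
```

The timeout is the safety valve mentioned above: if the child never reports in, the caller isn't stuck.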
  3. I don't think LV is really going to provide an easy solution here. Mercer wrote this (http://forums.ni.com/t5/LabVIEW/An-experiment-in-creating-compositable-user-interfaces-for/td-p/1262378) a while back, but... As to your specific code, I don't know that all those methods would work in an exe, and it's pretty fragile. It would be nicer to just give the parent a dynamic method for "get user interface(s)" and another DD VI to be the connector pane for those UIs, and you'd call them with Start Asynchronous Call (rather than setting the FP elements by name). You might also take a closer look at the CEF (http://www.ni.com/example/51881/en/). I've used multiple view nodes which share the same data to do something like what you're showing.
  4. I like that you decided to simplify the RTU serial read... I was stubborn and wanted to follow the spec, and I think that was a mistake -- serial APIs have come too far since RTU was created. The VISA lock thing is neat; I didn't know that existed until recently. My only comment there is that if you went with a DVR for both, you could put that in the parent class, so you can't execute a query unless the parent lets you, which feels nicer to me. Probably not any real reason to do it. I also like that you somewhat simplified the class hierarchy. I can tell you right now I've never seen anyone make a new ADU or implement a new network protocol, so simplifying things on that front is great. The places I've seen extension are adding new function codes or changing slave behavior (which obviously doesn't apply here). A few other thoughts from that perspective:
-I'd recommend pulling the PDU out of the ADU class and instead making the PDU a parameter of TX/RX and an output of RX. This is just a style thing of course, but it makes the code easier to comprehend, and for people adding new function codes it's clearer where the data is going and what the library is doing with it.
-I'd just make the PDU a typedef cluster. I went back and forth on this and ended up with the class, but I think that was a mistake. It doesn't need extension and doesn't need encapsulation, so..
-Pulling the individual 'build write...' and 'interpret response' functions out of the PDU class or library would, I think, make it feel like more of a level playing field for people adding new functionality. It makes it clear that the function codes are happily mutable, while in contrast you really shouldn't be touching "TX ADU". More importantly, it makes it less likely that someone adding a custom function code just adds it to your class and then saves it, making their version incompatible with the baseline.
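To illustrate the "PDU as plain data, builders outside the class" idea, here's a minimal Python sketch (function names are mine, but the wire formats follow the Modbus spec: function 0x06 write single register, function 0x03 read holding registers response). The key point is that the PDU carries no behavior, so adding a new function code means adding a free function, not editing the core type.

```python
from dataclasses import dataclass
import struct

# PDU as a plain record (the "typedef cluster"): no methods, no encapsulation.
@dataclass
class PDU:
    function_code: int
    data: bytes

# Function-code builders live outside the PDU/ADU layer, so adding a new
# function code never means forking the core classes.
def build_write_single_register(address: int, value: int) -> PDU:
    # Modbus function 0x06: 16-bit register address + 16-bit value, big-endian
    return PDU(0x06, struct.pack(">HH", address, value))

def interpret_read_holding_response(pdu: PDU) -> "list[int]":
    # Modbus function 0x03 response: byte count, then N big-endian registers
    count = pdu.data[0]
    return list(struct.unpack(f">{count // 2}H", pdu.data[1 : 1 + count]))
```

TX/RX would then take and return a PDU, keeping the ADU framing (CRC, addressing) entirely separate from the function-code layer.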
  5. No -- numerics, booleans, strings, and arrays of the same. It doesn't sound like you're using it for an appropriate use case: if every access is completely dynamic, then yes, there is no point to it vs a lookup. Where it shines is Neil's use case, where there is a fixed set of tags which you want to share globally but, being fixed, you can cache the lookups for. An equivalent but sometimes slower implementation would be using your variant repo where every contained value is a DVR or queue or notifier. The DVRs can be slower if, as in the OPC UA situation, you want to access a large number of elements in one go.
  6. It wouldn't be hard to change the XNode to take a DVR of a variant rather than the variant itself. I did something similar so it would take an object, and it was a lot easier than I thought it would be. For OPC UA, in the past I've used the CVT and just made a small process which reads each OPC UA data value and copies it into the CVT. Then you can define groups for each of your tags and read data by group. NI made something very similar recently, although not as nice as this variant repo. The advantage, though, is that it has some metadata (last update timestamp, whether or not the value is valid, and then another variant for metadata): https://decibel.ni.com/content/docs/DOC-47108
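The "fixed tag set, cached lookups" point is the whole trick behind a CVT-style table, and it sketches out in a few lines of Python (class and method names are mine, not the actual CVT API): resolve each tag name to an index once at init, then all subsequent reads and writes go by index, including batch reads by group.

```python
# Minimal current-value-table sketch: a fixed tag set lets you resolve each
# name to an index once, then read/write by index with no further lookups.
class CVT:
    def __init__(self, tag_names):
        self._index = {name: i for i, name in enumerate(tag_names)}
        self._values = [0.0] * len(tag_names)

    def handle(self, name: str) -> int:
        return self._index[name]           # do this once, at init time

    def write(self, h: int, value: float) -> None:
        self._values[h] = value            # cached-handle access: no dict lookup

    def read_group(self, handles) -> list:
        return [self._values[h] for h in handles]  # batch read in one call

cvt = CVT(["temp", "humidity", "pressure"])
h_temp = cvt.handle("temp")
cvt.write(h_temp, 21.5)
```

The batch `read_group` is also why a table can beat per-element DVRs when, as in the OPC UA case, you want a large number of elements in one go.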
  7. For the state part of the question: it also depends on how likely it is for someone to change settings from a different console. I've worked primarily on applications where this is a risk (HMI1 and HMI2 can both read from and change settings on DeviceA). As such, I tend to favor always, always, always storing the model on the device. HMI1 says "here are the settings I think you should have", and then DeviceA takes those settings and configures any attached devices appropriately. Ideally, you'd have some sort of subscription mechanism set up so that HMI2 knows HMI1 made a change, but... This changes entirely in situations where a device talks to one and only one master (i.e. a cRIO configuring an embedded sensor). In this case I think it's perfectly justified to maintain a local memory on the cRIO if there are performance concerns. I'd still tend towards letting the sensor maintain its own state, but for values which are accessed frequently, caching makes sense. For the energy meter situation, I'm guessing you configure it once on boot before you start running the system and then rarely touch it again. If that guess is right, I wouldn't bother with caching -- just always send the new config to the device. However, I do think there is a design question in there (item 1): how does your application distinguish between sensor types? If you have one big cluster with settings A, B, C, D, E, and F, and some sensors ignore F while others ignore C, you're left with a few questions:
-How advanced is your user? If you just ignore errors for settings that don't exist, will the user understand why the setting wasn't implemented as they expect? If so, I'd just send everything and let the sensor sort it out.
-If the user isn't advanced, how do you display different configuration screens to them? Because if you have different configuration screens, then your application could simply not send settings that don't make sense for that sensor.
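The "send everything and let the sensor sort it out" option can be sketched in a few lines (names and the capability set are hypothetical): the HMI pushes the full settings cluster, the device applies only the keys it supports and reports what it ignored, so the model of record stays on the device.

```python
# Sketch: device-side filtering of a full settings push. The device applies
# only the settings it supports and returns the ignored keys, which the UI
# can surface (or not) depending on how advanced the user is.
class Device:
    SUPPORTED = {"range", "rate"}      # hypothetical per-sensor capability set

    def __init__(self):
        self.settings = {}             # the model of record lives here

    def apply(self, incoming: dict) -> set:
        accepted = {k: v for k, v in incoming.items() if k in self.SUPPORTED}
        self.settings.update(accepted)
        return set(incoming) - set(accepted)   # ignored keys, for the UI/log

dev = Device()
ignored = dev.apply({"range": 10, "rate": 1000, "filter": "50Hz"})
```

Returning the ignored keys rather than erroring is one way to split the difference between the two user types described above.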
  8. Probably not; it's designed to handle the situation that sometimes came up prior to the RT-cDAQ, where you needed an RT processor but didn't care about the FPGA. The goal was to make a simple DAQmx-like API for that specific use case. That having been said, it's so simple that I've used it as a starting point for any streaming acquisition from the FPGA; for example, you could be processing data and then wanting to stream up the results, so in the acquisition loop, instead of reading directly from I/O, you could read from a target-scoped FIFO or a local register. Its true value to me, now that the RT-cDAQ is out, is (a) as a teaching tool, and (b) as a template which reminds me of all the stuff I might need, like having an idle state on the FPGA, acquisition stopping and restarting, checking for timeouts, etc. If you have your own template or pattern you start from, or your projects are different enough that such a tool doesn't make sense, then the reference library doesn't make sense. The page itself also has some great benchmarks for network and disk streaming which I reference regularly.
  9. I'd recommend just using this: http://www.ni.com/example/31206/en/ It attempts to simplify the problem. If you want to understand it better, make sure you take a look at this: http://www.ni.com/pdf/products/us/criodevguidesec3.pdf starting pg 14 ("90" in the PDF)
  10. There's a bunch of those, to the point that I don't find those properties in the help very trustworthy. I believe the explanation is that it can work on RT in interactive mode with the front panel open, but not in exe form? It will definitely work on the Linux cRIOs with DisplayPorts, as you have to check a little box which tells LabVIEW to compile it with the UI components. You can use an invoke node on the VI, I think (control value.get all), but then you lose the signalling property nodes.
  11. Something else I would throw in: if you have an object with a reference in it of any kind, I think it really pays off to have an explicit "allocate" or "connect" or whatever function which converts your object from a "dead" object (i.e. it's just a cluster) to a "living" one whose state has to be carefully managed. I would also recommend it be different from your "construct" function. A specific pattern I've found to be successful is similar to what manudelavega suggested. You have a "configuration" class and an execution class. You can drop down your config class, change the settings that make sense, and then initialize your runtime class using that configuration class. If you want to get fancy, you can load into that configuration class handy things like saving to disk or specifying which specific runtime class it needs to be loaded into, and so from your user's perspective there's really just the one class that happens to metamorphose through its life cycle.
  12. Ah yeah, that's what I meant -- a scrolling string rather than a status bar. That's interesting; I think because I do more RT stuff, I tend towards thinking of RT as 'the important thing', and it does all this detailed error logging and recovery, while the UI is more... I guess friendly? So if the user does something wrong, it's not really an error, it's just a mistake, and the scrolling log trains people how to not do that. Your background color thing is, I think, along the same lines, and I like that idea. I've done something like that once or twice when I felt fancy (pop in a hidden red X next to an error control in my case, but same concept). It does feel like that would get tedious after a while. Do you have a good system for setting that up, or do you just bite the bullet and do it manually, one by one? Have any of you ever actually gone all the way and said goodbye to the standard LabVIEW error cluster? Or is it just too ubiquitous to get around?
  13. lol No really, I didn't know that. That's kind of funny. In general I try to use the simple error handler only on exit, and even then it's just a bad habit -- nobody ever, ever, ever reads the dialog. You're better off saving it to a log in a folder called "delete me" on the user's desktop. I try to handle errors gracefully and then report the failure in a status log at the bottom of the window (or in the temp dir). That might not be perfect, but at least it keeps a log so people can go back and see "oh man, I got that error, that's why that operation failed" before completely ignoring the rest of the error message and code and moving on with their lives. The other thing I find for myself is that I loathe the suggestion "simple error handler is bad, let's use the 1-button dialog". It's great the first time, as it gives you immediate blocking feedback on what you did wrong. And then every subsequent time you see it, you want to kill the developer. That's another reason I find the status bar to be the least worst option. The user tries to do something silly like making a new test with no name, you put a message in the status bar, they say "whoops", and then they don't blame the code (OK, they don't blame it as much). Has anyone else found a better way of gracefully showing errors in a non-obnoxious way?
  14. Another neat option, depending on the platform (OK, everything except VxWorks and Pharlap), is to use a traditional web server (Apache, etc.) to host public content and proxy requests matching a given URL pattern (like /myLVApp/*) over to the LabVIEW web service. That way, static pages are served up in the normal way by a standard, generally pretty fast web server, and anything you need to run code for can still be handled by your LabVIEW service. I kind of vaguely describe this over here: https://lavag.org/topic/19260-dynamic-content-in-other-web-servers/ I'm sure Neil's way works well, but if this option sounds interesting to you, I can probably help out a bit, or at least get you going in the right direction.
  15. If you have the bitfile opened using the normal FPGA Open, it should be compiled into the rtexe. It should also work if you are referencing it by path. The error you posted relates to flashing it onto the FPGA, which is only necessary if you want the FPGA to boot before your exe does.
  16. Michael, did you ever figure out a solution? I was just trying to figure this out today and found a bunch of threads from a decade ago about this, but I haven't found anyone with a solution to the problem. The silly solution I've come up with is to use pt->row on coordinates +/- 5 pixels, and if the tag changes then it must be on that border... but I don't like this idea. And this only really works for me because (right now) only one type of element in the tree is allowed to accept children, so I just need to check for that and can either drop or insert based on that property.
  17. What are your workarounds? The thing that comes to mind is to always do var->lvobj first, then lvobj->child class, but that could certainly get ugly. Where specifically is this coming up in the code?
  18. This makes sense if you break them both down:
Top option:
-Load int, load int (likely pre-loaded, since I'm assuming that's the point of letting LabVIEW do autoindexing, so it should be fast)
-Add int, int
-Convert someIntType->int (likely free)
-Add int, int
-Convert result to I32 (maybe free)
-Bounds check on index
-Random access from memory at index ...still waiting... ...still waiting... ...still waiting... ...result
-Store result in output array
Bottom option:
-Load int, int (fast)
-Or int, int
-And int, int
-Move result to output array
Some of it happens in parallel, but there is a wire wrapping around from the end of the top loop to the start of the bottom.
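The same two shapes can be sketched in Python as a rough illustration (this is an analogy, not the original diagram): one version fetches its answer from a lookup table (index arithmetic, bounds check, memory access), the other computes it directly with OR/AND. Both produce identical results; the timing numbers will vary by machine, so none are asserted here.

```python
import timeit

# Truth table for OR, indexed by (a << 1) + b -- the "top option" analogue.
TABLE = [a | b for a in (0, 1) for b in (0, 1)]

def via_table(a: int, b: int) -> int:
    return TABLE[(a << 1) + b]   # index computation, then a memory access

def via_bitwise(a: int, b: int) -> int:
    return a | b                 # pure register arithmetic, the "bottom option"

t_table = timeit.timeit(lambda: via_table(1, 0), number=100_000)
t_bits = timeit.timeit(lambda: via_bitwise(1, 0), number=100_000)
```

The interesting part is less which one wins in Python than that the two are algebraically interchangeable, which is what makes the comparison in the post meaningful.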
  19. I don't have the code in front of me, so forgive me if I'm off a bit, but I think there are two options here and I'd like to be sure which you're proposing:
-To JSON, which produces a raw string
-To JSON, which produces a json.lvlib object, which the library then flattens in the usual way into a string
I'd prefer the second option myself... is that what you're going for? Yeah. It's probably OK on desktop, but it's really slow on RT and probably not suitable for any code which is run regularly.
  20. Also keep in mind that providing random values to a case structure means a lot of conditional jumps that the processor predicts wrong. You're trading the execution time of an AND or an OR for the execution time of finding the next operation to call in memory, and potentially getting the wrong one and having to back up. I'm actually surprised the case structure isn't significantly worse.
  21. As hoovah and mark said, you probably want something more like the standalone cDAQ-9132, which has an RT controller but still uses DAQmx (it's $1k more, though, because the cDAQ doesn't come in a "value" line -- it does come with the monitor out, x64 Linux, and a much faster processor, so it may be worth the cash anyway). If you actually do need the FPGA because of timing, processing, or specific I/O needs, then it is absolutely possible to simulate the FPGA and RT code, something which has gotten steadily better over the years. Is it seamless? No, but it works well enough. http://www.ni.com/white-paper/51859/en/
  22. From what I can tell, people like to use that to make hard-to-read code such that a particular function (with side effects) will or won't get called depending on earlier boolean expressions: https://en.wikipedia.org/wiki/Short-circuit_evaluation Either way, it's more realistic to say that the behavior is defined like this, rather than that the compiler does some special optimization -- LabVIEW's compiler simply doesn't do the same: "The short-circuit expression x Sand y (using Sand to denote the short-circuit variety) is equivalent to the conditional expression if x then y else false; the expression x Sor y is equivalent to if x then true else y."
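The quoted equivalence is easy to demonstrate in a language that does short-circuit (Python here): with a False left-hand side, the right-hand function (and its side effect) never runs, and the conditional-expression form behaves identically.

```python
calls = []

def side_effect() -> bool:
    calls.append("called")
    return True

# "x Sand y" is equivalent to "y if x else False": with x False, y never runs
short_circuit = False and side_effect()
conditional = side_effect() if False else False   # the equivalent spelled out

# With x True, the right-hand side does run -- once per expression
both = True and side_effect()
```

LabVIEW's And/Or evaluate both inputs regardless, which is exactly why code relying on the side-effect-skipping behavior doesn't translate.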
  23. Thanks! In the other thread I mentioned I used this to make a quick demo for Freeboard using the CVT. I've attached the demo here. To use it:
1. Unzip the attached freeboard demo.zip
2. Install the CVT (current value table) from VI Package Manager
3. Download https://caddyserver.com/download and extract it to the same folder as the zip. Caddy.exe and "caddyfile" should be in the same directory.
4. Optional: switch up the ports in "caddyfile".
5. Open zip/lv/sampleweb.lvproj, then open server.vi
6. Optional: switch up the ports
7. Press run
8. Run caddy.exe
9. Go to localhost in your browser.
10. At the top, press "Load Freeboard", then select zip/lv/http and ws.json
11. Change values on the front panel of server.vi.
There are 2 sets of data sources; one set uses websockets and the other uses HTTP requests. Both provide the same data, which looks like this:
{ "humidity": 2.377959, "temp": 0.000000, "label": "jkjk", "set": [1], "onoff": true }
Note: the code is not great, just something I threw together really fast with pieces of the CVT web library on ni.com/referencedesigns. However, one nice thing is you can see that creating a web service like this requires 2 functions: "get" and an initialization helper. The other nice thing is that the web service didn't start until after my application did, so I know exactly what the state of my system is when the ws starts (vs deploying from the project and then initializing later, which is what CVT web has to do). freeboard demo.zip
  24. Depends on what exactly is taking 1 second. If it's the conversion from cluster to JSON string -- well, that library, if I remember correctly, is on the slow side. I'd recommend this one: https://lavag.org/files/file/216-json-labview/ or of course the built-in flatten to JSON, which is even faster.
  25. I've occasionally seen or heard from people wondering about hosting LabVIEW web services in other servers, like Apache, Microsoft IIS, or nginx. They may already have a server they just want to plug into, or maybe they want to use some of the more advanced features available in other servers -- for example LDAP/Active Directory authentication (http://httpd.apache.org/docs/2.2/mod/mod_authnz_ldap.html). However, dynamic content still needs to be provided somehow, from LabVIEW. I've never really done too much with web services, and I was curious how this stuff works in other languages. I'm fairly certain I'm 'rediscovering' some of this, but I couldn't find what I was looking for anywhere in LabVIEW-land... so I thought I'd share what I found out and what I did about it, in case anyone else was curious about the same. What seems to be the case is that when people using other languages want to add code to the back end of a web server, they have a few basic options:
-Compile code into the web server. This option is more difficult to reuse, as you're making a plugin for a particular server, but some servers can use plugins for other servers. Apache has modules, for example mod_perl, which is basically a dll which runs perl. IIS can run all sorts of .NET code and scripts. I *think* this is what LabVIEW has done for its web service, but can't confirm.
-Use a protocol between the front-end web server and the back-end application. The simplest version is CGI -- the web server runs an exe, passes any POST data as standard in, and waits for data on standard out or standard error. This is slooooow, as it runs an exe every time. This was improved with FastCGI, where the web server launches N copies of the exe on boot and then leaves them running. These exes are sent packets which correspond to the CGI standard-in data, and they respond with standard-out and/or standard-error packets. These packets can be sent through standard I/O (which is tough with LV) or through TCP (which is easy).
-Some people use a simpler HTTP server running in their program as the back-end. It can be simpler and less robust than, say, Apache, because in theory you're only getting well-formed packets from the front-end HTTP server. Compared to FastCGI this can be even faster, because the HTTP request doesn't have to be re-packaged into an FCGI request, but HTTP is single-threaded (for lack of a better term -- requests must be responded to in the order they were received). The other advantage is you can use a web browser or other HTTP client to talk directly to the simple HTTP server for debugging purposes, which you can't do with FCGI. This is essentially a reverse proxy: https://en.wikipedia.org/wiki/Reverse_proxy
-There are some custom protocols used, like WSGI, which seems to be Python-only, or things like Java applets.
The point is, it's pretty common to let something big and complicated like Apache take care of the incoming requests on port 80 while letting one or more back-end servers handle specific requests as needed. This could be handled by the standard LabVIEW web services (i.e. Apache reverse-proxies to the standard LabVIEW web service), but I was still interested in how this stuff works, so I made some (rough) code, which I posted here a few weeks back: https://github.com/smithed/LVWebTools
tl;dr: Built-in, monolithic web-service code like what LabVIEW does seems to be pretty uncommon. Most (citation needed) languages seem to just use a protocol between the main, very separate web server responsible for authentication, security, and static file serving and the tiny backend exe responsible for everything else. I made some neat tools to do this, in LabVIEW. Also, the internet really is just a series of tubes.
-----------------------------------------------------------------------------------------------------------
Note: the following gets really into the weeds; if you don't want to try the code, stop here.
I started with FastCGI because it seemed simpler... but it turned out not to be. You're still responsible for adding most of the headers and response codes and such into your response, so it seemed like it had less structure, but that structure was actually necessary anyway most of the time. If you want to take a look at it, FCGI/FCGI server.vi is the main VI, and FCGI/default responder action.vi is the main VI I was using to test out different responses. With Apache, you'd set it up using mod_proxy and mod_proxy_fcgi, but I'd actually recommend a new web server called Caddy (http://caddyserver.com/) because it is more developer-friendly. The "caddyfile" you'd need would be as follows: "0.0.0.0:80 fastcgi /lvroot 127.0.0.1:9000" (adjusting the ports and such as needed, of course). If you run the Caddy server with that file and then navigate to http://localhost/lvroot/anything, it will invoke the LabVIEW code. With FastCGI, you manually add any headers you need to the top of the response, so as an example you'd have to write "Content-Type: text/html", followed by a blank line, followed by "<html><body>blah</body></html>". After that I decided to try out HTTP, which I quickly learned is a really gross protocol. The core of it is nice and simple, but the issue to me is that the transport-related headers are mixed in with content-related headers; and oh, by the way, you can shove extra headers in the content section if you want to; oh, and hey, if you want to change protocols in the middle of operation, that's supported too. It's weird. But I got a very basic server loop up and running, and it's located in http/http server.vi. This one I actually used for something (the Freeboard thing I mentioned in this thread: https://lavag.org/topic/19254-interactive-webpage-vi-options/), so I made a basic class to inject some behavior. That class is located in http/http responder and provides the "get.vi", "post.vi", "put.vi", and "delete.vi" you'd expect.
Since I was 90% of the way there anyway, I added a protocol upgrade function in order to pass functionality off to (for example) a websocket handler. This was totally not worth it (the code got more complex), but it's cool that it works. As above, I'd recommend Caddy server, and the appropriate line you'd want to add to your file is "proxy /lvhttp localhost:9001". Because HTTP allows sending partial results, my implementation uses a queue to send data back... I think most of the fields are pretty obvious, except to note that the headers field is a variant attribute lookup table with a string/string implementation (header/value). If a response code isn't specified, 200 is assumed. Side note: because I have a giant NI next to my name, I feel the need to note that this is just a fun project, not intended in any way to replace the fully supported in-product server, which includes all the fancy things like authentication, URL routing, etc. My thought was that this could be handy for making really stupidly simple services that plug into big ones *already running* in Apache, nginx, or IIS.
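The back-end-behind-a-reverse-proxy shape described in this post is small enough to sketch end-to-end in Python's standard library (the route and payload here are made up; a front-end like Caddy's "proxy /lvhttp localhost:9001" would sit in front of it in real use). The responder only ever needs to handle the simple, well-formed requests the proxy forwards.

```python
from http.server import BaseHTTPRequestHandler, HTTPServer
import threading
import urllib.request

# Tiny back-end responder, in the spirit of the http/http server.vi loop:
# one handler per verb, explicit headers, 200 assumed as the default status.
class Responder(BaseHTTPRequestHandler):
    def do_GET(self):
        body = b'{"status": "ok"}'
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep the demo quiet
        pass

server = HTTPServer(("127.0.0.1", 0), Responder)   # port 0 = any free port
port = server.server_address[1]
threading.Thread(target=server.serve_forever, daemon=True).start()

# Stand-in for the front-end proxy (or a browser, for debugging):
reply = urllib.request.urlopen(f"http://127.0.0.1:{port}/lvhttp").read()
server.shutdown()
```

Being plain HTTP, you can poke it directly with a browser for debugging, which is the advantage over FCGI noted above.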