Everything posted by Mads

  1. We do both. As I mentioned, 3.5 byte times is extremely short for a Windows machine, at least at higher baud rates, but most control systems make this time configurable anyway. That's the main point - the master should not issue commands too quickly in a row. It's not really a problem if other slaves reply faster, because having a poll and an irrelevant reply garbled together in the same serial buffer does no harm (that data should just be discarded anyway). The problem is if the master shoots another message addressed to your slave too quickly after getting the previous reply; then you are not able to separate that relevant data from the preceding irrelevant messages.
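     For reference, a back-of-the-envelope sketch of what 3.5 character times actually amounts to (Python is used here only as pseudo-notation, since a LabVIEW snippet cannot be pasted as text; it assumes the usual 11 bits per RTU character and the fixed 1.75 ms minimum the serial-line spec recommends above 19200 baud):

         def t35_seconds(baud_rate, bits_per_char=11):
             """Approximate 3.5 character times for Modbus RTU.

             Assumes 11 bits per character (start + 8 data + parity + stop);
             above 19200 baud the serial-line spec recommends a fixed 1.75 ms.
             """
             if baud_rate > 19200:
                 return 1.75e-3
             return 3.5 * bits_per_char / baud_rate

         print(t35_seconds(19200) * 1000)  # ~2.0 ms - easily swamped by Windows scheduling jitter

     At 19200 baud that is roughly 2 ms, which is why a configurable inter-frame delay in the 10-30 ms range (see post 4) is what ends up being used with a Windows master in practice.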
  2. Oh, non-broadcast messages can pose a challenge too, if you have multiple other slaves on the same line replying instantly after 3.5 byte times... Broadcasts, on the other hand, are seldom used in the systems we deal with, and when they are, they are mostly directed towards our non-Windows targets (which in our case means sensors running assembly or VisualDSP-based code instead, with no problem handling such short silence times).
  3. Personally I prefer the Modbus way here: using silence time as the marker for the end of a message. And I really do not want to chew through that data hunting for an ID with a CRC that only probably indicates a valid message (although the chance of a false positive is low, we are dealing with a *lot* of messages here...). Calculating lengths is also messy with Modbus, as many messages have unpredictable lengths (Modbus should have had a header with the length in a fixed position... instead each function code is different, not to mention user-defined ones). As for not using any of the libraries already out there, I share many of your sentiments, James. Linking the Modbus handling tightly with the rest of the application, though, is something we have mostly avoided. We use an in-house library that deals only with the fundamentals of generic Modbus, and have other code add layers on top of that instead of just using the registers. I did have to insert some user-defined function code forwarding into the library once to make a generic, robust file transfer (ZMODEM inspired) and command protocol on top of Modbus though... That solution allows us to upgrade the software on our subsea embedded PACs, and use various TCP/IP based protocols transparently over a serial Modbus RTU link when no Ethernet is available :-) It would have been nice to have something like that be part of the Modbus standard instead though.
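     To illustrate the point about unpredictable lengths, here is a rough sketch (a hypothetical helper, not from our library) of what a length-based framer has to do: the expected reply length depends on the function code, for the read functions it also depends on a byte-count field buried inside the reply itself, and a user-defined code gives you nothing to go on.

         from typing import Optional

         def expected_reply_length(frame: bytes) -> Optional[int]:
             """Expected total RTU reply length in bytes, or None if not yet known."""
             if len(frame) < 2:
                 return None
             function = frame[1]
             if function & 0x80:                       # exception reply: addr, fc, code, CRC
                 return 5
             if function in (0x01, 0x02, 0x03, 0x04):  # read coils/inputs/registers
                 if len(frame) < 3:
                     return None                       # length hides in the byte-count field
                 return 3 + frame[2] + 2               # addr, fc, byte count, data..., CRC
             if function in (0x05, 0x06, 0x0F, 0x10):  # write replies echo a fixed format
                 return 8
             return None                               # user-defined function code: no rule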
  4. Basically the RTU slave should continuously check its serial port, and if there is any data there it should recheck the port until no new data has been received for those n milliseconds. Then it should process the received message. If the message is intended for a different ID it should just discard the data (clearing its serial buffer for the next message). Using a 3.5 character silence time can be a bit challenging, yes. Windows at least seems to add delays every now and then (it's not deterministic...), so even in a tight loop you can get into that kind of territory (you will typically see the problem grow the longer the Modbus messages are, as the probability of an incorrectly detected pause then increases...). Luckily, most of the time the Modbus master can be configured to put more silence between polls (inter-frame delay or a similar setting), making it easier for slaves to clear their input buffer between polls and receive separate messages. We have dealt with most of the control systems used in the oil and gas industry over the years, and have never had an issue after the initial setup (all of them can extend the silence time if needed, and/or chop the messages into shorter ones). Instead of 3.5 byte times at a baud rate of 19200, for example, we might end up using 10-30 ms.
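     A minimal sketch of that receive loop (the port object with a read_all() method is hypothetical - substitute whatever serial API is in use - and silence_s is the relaxed 10-30 ms figure rather than a strict 3.5 character times):

         import time

         def receive_rtu_frame(port, silence_s=0.020, poll_s=0.001):
             """Accumulate bytes until the line has been quiet for silence_s seconds."""
             buffer = bytearray()
             last_rx = time.monotonic()
             while True:
                 chunk = port.read_all()      # hypothetical: returns waiting bytes, or b""
                 if chunk:
                     buffer.extend(chunk)
                     last_rx = time.monotonic()
                 elif buffer and time.monotonic() - last_rx >= silence_s:
                     return bytes(buffer)     # silence detected: treat the buffer as one frame
                 time.sleep(poll_s)

     The caller then checks the slave address in the first byte and simply discards the frame if it is addressed to another node, which clears the way for the next poll.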
  5. We found the solution now - and it was not the firewall. Two cookies had to be deleted, but only those two (so deleting all cookies did not solve anything). The two cookies were an_ca and an_cci. Perhaps the reason was that the two contained street addresses with Norwegian characters...I do not know that yet. I've sent the info to NI.
  6. Yes, it seems to have been triggered by an automatic update on a Palo Alto firewall... It does not block anything when I'm not logged in, and the URL does not change when you are logged in, so I have not pinpointed what the issue is yet, but anyway. (I created a virtual machine on Google Cloud just to prove the issue to the firewall admin - a quick and easy solution if you need access to a clean test PC on the outside of the firewall, by the way...).
  7. Yes, I do not think it is a global phenomenon...I'm one of the few lucky ones
  8. Not related to lavag.org, but forums.ni.com... and the problem prevents me from posting anything there, so I'll try here: As soon as I log in on forums.ni.com, regardless of which machine or browser I use, I get server error 500 on all pages on forums.ni.com. So I'm effectively locked out of NI's discussion forums. I've tried requesting help by creating a service request at ni.com, but the only reply I got back was that they did not handle web site trouble, so please post it here *** instead... which I did, with no reply at all. Has anyone else experienced this? Error 500 is server-side as far as I know, and as I mentioned it does not really matter what I use to browse the page... so my guess is that there is something wrong with my account that NI has to fix.
  9. Rigid VI Implastic VI In-elastic VI Change-resistant VI Less adaptable / Low-adaptable VI High-friction VI Dodo-VI High-maintenance VI Costly VI Slow-moving VI Detour-VI Roundabout VI Brick of a VI .... 204. Unpliable VI...
  10. The problem does not seem to be Linux RT as such, but the cRIO-9030 - so probably the fact that it is x64. I took the same code and ran it on a cRIO-9063 (ARM) which also runs Linux RT, and it works as it should.
  11. I copy them to a Windows computer and open them there with Explorer. I have now reduced this down to compression of a single file on a cRIO-9030 with LabVIEW 2015. The input and output files, plus the source (just the top level), are included in the attached file (Zip test.zip). I've also attached a capture of the subVI front panels in the call chain, if that might offer any clues.
  12. As far as I've established so far, everything that gets passed on to the linked library calls is the same on the Linux target as on Windows (the file attributes, e.g., are calculated to be 0), but the resulting file has minor differences (attribute flags, I presume; I have not gotten into the details of the format yet) in the first and last lines of the archive. Otherwise the content is identical (in other words the files are there, but incorrectly identified as directories, so I do not get to their content). Perhaps it has never worked on these Linux RT targets :-O I hope I'm wrong. I know the deflate/inflate works on Linux RT targets, but I discovered now that for the directory compression a different method had been put in place at some point in time for the cRIO-9030, so I was wrong when I said it had worked before... It was, it seems, just due to an override on those targets.
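     A quick, platform-independent way to check whether the entries really carry a directory flag (rather than relying on how Explorer interprets them) is to dump the central directory attributes, e.g. with Python's zipfile module; the 0x10 bit in the low byte of external_attr is the MS-DOS directory attribute, and the file name below is just a placeholder for the archive produced on the cRIO:

         import zipfile

         with zipfile.ZipFile("output_from_cRIO.zip") as archive:    # placeholder path
             for info in archive.infolist():
                 dos_dir_flag = bool(info.external_attr & 0x10)      # MS-DOS directory bit
                 print(info.filename,
                       hex(info.external_attr),
                       "dir flag:", dos_dir_flag,
                       "trailing slash:", info.filename.endswith("/"))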
  13. LabVIEW 2017 with OpenG Zip Tools 4.1.02 on a cRIO-9030 (Linux RT target). I had a piece of code developed in LabVIEW 2015 for a cRIO-9030 where a folder is compressed with the Zlib Compress Directory function, and it has worked for years... Now I have converted it to LabVIEW 2017, and for some reason any files (lots of .ini files) in the directory now end up as folders. What is even stranger is that when I debug, I can see that the files are detected as files, as they should be, by the zlib file information VI. I seem to remember having run into a similar issue a long time ago, but I cannot remember what the solution was... Or perhaps it is a new issue in 2017? I'm in a bit of a hurry, so I'm throwing out a question here just in case I get a response before I've spent more time on it... Any ideas?
  14. The biggest headache with USB is BadUSB. I trust NI will make sure their USB drives are OK though. I do not miss the DVDs (CD-ROMs? That must have been back in the LabVIEW 7/8 days ;-)).
  15. I tested 2017 vs 2015 now, and the behavior does seem to have changed (I have not checked the behavior in 2016 though, I skipped that version): In 2015, double-clicking a preallocated reentrant subVI in either edit or run mode (not actually executing it, just double-clicking on the VI while the code is running) will open the front panel of a clone (noted in the window name). In 2017, it opens the master VI (the clone's window is not available until the subVI is executed).
  16. Nikita, have you signed up for the 2.0 beta? Having cooled off a bit about the decisions NI has made with NXG, I'm spending more time testing it (even though it is not supposed to cover many of our needs for years to come, I do not want to have to start from scratch then), and providing feedback to NI (I'll probably fill up their inbox) in the forums and through the built-in feedback function (I did not find it at first - it's the talking bubble at the top right).
  17. Apparently the CG term is not supposed to be used. I would personally prefer to have an explicit way of referring to the...non-NXG versions, but the official names are LabVIEW and LabVIEW NXG.
  18. I mentioned workarounds in the post, and this is one of them. A bad one. You end up wasting way too much real estate on this. Here's one for the NXG idea exchange: make a compact version of the IDE surround the panel/diagram (even make this part optional), and let larger items like the palettes magically appear at the cursor with a mouse-click (...oh, wait...). Not often. And NXG is like shooting yourself in the foot then, to get rid of a mosquito. None of these require NXG to be introduced. NXG would be a great new NI MAX, and some of its functionality would be great to have integrated into LabVIEW 2017/18 too. It's the whole other list of unnecessary changes, and of OK changes released in an infantile NXG, that is the problem. I'm sure Shaun is spinning in his chair... but let me chime in here as well: Allow me to put the LabVIEW RT environment on a low power, low cost Linux SBC with plenty of serial IO and dual Ethernet (no such thing from NI other than the SOM, and then only if you design your own carrier board(!)) and run the same code as I use on Windows desktops (which is what we do today, thanks LabVIEW CG!), and I'll grab that opportunity over cRIOs faster than you can say NXG. Today we actually rip out the insides of cFP-2220s and put them in subsea instruments, just because that's the best option available unless we move away from LabVIEW.
  19. I saw NXG through the tech preview, and there were a few of us who protested in the forums there. My hope was that it was mostly just experimentation, not something they would release as it was. You have an optimistic view of things in the illustration. This time around I do not think it is just a question of users not wanting to change, though. Every company making large changes can tell themselves that, and choose to insist on their planned direction, but quite often it's just a bad product. There are positives with NXG (if it were the next generation of NI MAX, for example), but they do not justify and cannot outweigh the negatives (when it is supposed to be the next LabVIEW). This is the first time a release from NI has gotten me more interested in the competition than in the new NI products.
  20. The MDI/tabbed interface solution for VIs seems to be one of the most fundamental flaws of the NXG GUI. One of the biggest strengths of LabVIEW CG is that it enables and encourages continuous testing. You can have the user interface of your VIs (the whole application you are building, for that matter) *on screen*, shown as it will be when built (not as a page within an MDI interface), and provide input to it and view the output - while you at the same time view and change the code of various VIs... tweak, run (as soon as the code is not broken), input, view output, tweak, run... etc. Just the idea that it is OK to require the developer to manually tab between the front panel and the code is ridiculous. It is scary that they can think that's a good change in the workflow, but then again - the text programmers are not used to much of what is (was) great in LabVIEW, so that might explain why such things seem expendable. Now you can say that there are ways around this, or that it can be *fixed* in future versions of NXG - but the problem with that is that it would also require a complete redesign of the rest of NXG. There is too much stuff in NXG relying on it. Do not get me wrong here - I would love it if someone proved me wrong and showed me the NXG light. Where are all the NXG evangelists? What do the Knights of NI, for example, really think about NXG and the road ahead? Are they worried or even angry too, or do they think this is the best thing since sliced bread?
  21. That's a pretty good description of how it feels to me too. A fancy interactive configuration tool, first and foremost. We're back to the "no programming", NI hardware-centric marketing dream, and not the fun graphical cross-platform programming environment that can be used for general application development. Windows Metro for the desktop comes to mind too... One of the first things that bugs me when working in NXG 1.0, and unfortunately 2.0 as well, is the whole concept of having everything in one window. It's claustrophobic, and feels like I'm stuck in - you said it, NI MAX... I want to break out those front panels, view them as I would want them in my built application, throw some of the block diagrams to another screen to do some debugging while interacting with the front panel at the same time, and get rid of all the overhead of the surrounding interface, etc... I can see how you can make multiple complete tabbed IDE windows, but where is the WYSIWYG for built applications in that? Previously I considered what NI did with G/LabVIEW to be similar to what Apple did with MacOS: they created a revolutionary environment that made it much more intuitive and fun to use computers/develop applications (and just like with the Mac, only the smartest people understood that this was the future). It did mean that there were limits to what you could achieve... and we've all been spending lots of time trying to work around those for years (hoping for better native graph controls and a few thousand other things), but there was enough functionality and flexibility there to make the joys of graphical programming worth it. The last couple of years I've found myself frustrated by the fact that things seemed to move backwards, away from that philosophy, with things like Linux RT; gone were the days when everything was available wrapped in a nice graphical interface, and I suddenly found myself writing bash(!) scripts to get even quite basic stuff done. When I saw the demo of the web functionality in NXG 2.0 I got some of the same feeling; in the good old days (yeah, yeah) I would not have expected it to be seen as a positive that the HTML code was just around the corner... To me the whole concept of LabVIEW is to provide a 100% graphical editor; it should allow you to do 99% of what you want to do *graphically*. The HTML should be accessible too, sure, but not "in your face". Have the text programmers behind LabVIEW gotten too much influence? Thinking that LabVIEW is really just for non-programmers (so you really need to make it a configuration tool instead), and that if you want to do some real programming you should work like they do - with text, and an IDE as close to the ones they are used to as you can get? Oh well, perhaps I should cool off... and force myself to test it more, or maybe not (so far that has made me angry pretty quickly).
  22. Sure, trying to get feedback from people on something already in beta that has been changed as dramatically as NXG is rather unfruitful... it's too much, too late. Especially if most of the users that might have feedback for it stay away because of previously signed NDAs. We could always dream of access and influence at every stage, and hope that our personal views won the battles, but I do not think that would be productive. In general, revealing too much about future products is bad for business (in this case for both customers and NI). If the news is about future *updates*, you have a positive effect though. Then people know that they will gradually get more and more out of their investment; they will not need to throw the old cell phone in the bin and buy a new one to get access to a new feature. You could say that because NXG is given to existing owners of LabVIEW CG (not sold as a separate product - that would have been terrible) it could be considered an update, but it's more like halting the development of the software on your existing cell phone and giving you a new one with some new features... but unfortunately you cannot use it to phone anyone for the next couple of years (use it on your cRIO projects, for example). So now you have access to some new features, but you have to carry around two phones...
  23. Did you just read my mind? Perhaps I was reading yours. <Old man (well, middle-aged) yelling at the sky> We have been waiting for major upgrades of LabVIEW for years, and after so many years without much progress it turns out NI has really just abandoned ship to build a different one, not seaworthy until 2020. Where does that leave LabVIEW TG (This Generation) but dead in the water? The road map does not exactly encourage us to base our business on it. We either spend time and money on staying updated on the next-generation stuff for many years until it actually can replace this generation, or move away from NI. Frankly, I would have preferred it if they had kept us in the dark, working on NextGen for another 4 years until it had reached "parity", and only *then* told the world about it. </Old man yelling at the sky>
  24. I'm mostly worried about what this means for the regular LabVIEW and its users. Does it leave us with a half-dead LabVIEW alongside an incomplete (and unfortunately, in many respects, less user-friendly) NXG for many years to come? Do we have to choose between an old-fashioned/outdated parent and an infantile NXG, eventually getting forced onto NXG due to the age of the regular version? I was really hoping that they could transform the underlying technology in a few major jumps, but avoid alienating the current users by a) keeping the functionality (hardware support etc.) and b) changing the GUI more gradually. Or just make a clean cut. As it is now, I'm afraid we might get a division between the large user base which really needs the functionality only supported by LabVIEW 2017 (or earlier) for many years to come, and a smaller next generation of users which will adopt the new user interface more easily, as they do not have experience and investments in the regular LabVIEW already, but which will also be limited by the lack of hardware support etc. in NXG. Perhaps someone attending NI Week can ease my worries (or reaffirm them)?