
ShaunR


Posts posted by ShaunR

  1. I was thinking about creating both a 32-bit and a 64-bit DLL but would like to keep a single VI interface. A fixed, always-64-bit integer would probably work, except that it is not a normal integer but really a distinct datatype that should not be connected to anything else. The enum-inside-a-datalog-refnum abuse is a nice trick to ensure this, yet most refnums in LabVIEW (except the user refnum) are always 32-bit entities, so that would not be an option. I also need to pass out some kind of refnum to manage the message hook throughout the program. In practice this is the HWND of the hooked window too, but since this refnum is only supposed to be used with functions from that library, there are several ways to deal with it transparently, as the user does not need to be concerned about what the refnum really is. In the message structure, however, I do not have that luxury. The only reason for it to exist in there is to allow a possible user of the library to do something with it at the Windows API level, and I have no intention of providing wrappers for any Windows API calls to work with a private version of this refnum. It may end up as an always-64-bit sort of hack really, done similarly to what LabVIEW does when dealing with pointer-sized variables.

    Yup. It is a nice trick (refnum) because of the polymorphism that can be performed, and it is "the right way". However, I have found that this is true only within LabVIEW; when dealing with interactions with external code, it is better (IMHO) to leave things like this in their native form, since you only end up type-casting them back to an integer to pump them into another call. I had the same sort of thoughts with the SQLite API's SQL ref but left it as an integer in the end, since every VI ended up with a type cast in and out.

  2. As I'm working on the sidelines on this I ran into a difficulty. Windows handles are really opaque pointers, and as such they are 32 bits on 32-bit Windows and 64 bits on 64-bit Windows. This is a bit of a problem, as the original Windows Message Queue library contains a Windows handle in its data structure, since it completely mirrors the MSG structure used in the WinAPI. There seems to be only one datatype in LabVIEW that truly mimics this behaviour, and that is the so-called user refnum. That is a refnum defined by object description files in the LabVIEW resource directory and as such not documented at all.

    So the question is now: does anyone know of another LabVIEW datatype that is sure to be truly pointer-sized when embedded in a cluster, or alternatively, is there any objection to not including the Windows handle in the message structure?

    The easy way would probably be to just expose it as a 32-bit value and, if the user "must" have it, make him dereference it with a MoveBlock (move the issue downstream).

    However, I'm not sure what you are asking here. If the DLL is 32-bit, then it can only be loaded in 32-bit LabVIEW (and therefore can only handle 32-bit pointers anyway). If it is 64-bit, then it cannot be used by 32-bit LabVIEW. So if the cluster uses a U64, it will be able to represent both, even if it creates a "red dot" when passed to another API/DLL call which is set to pointer-sized.
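    As a rough sketch of the U64 idea (in Python rather than LabVIEW, since the point is the flattened byte layout, not the diagram): widen the handle field to a fixed 64 bits so the same message layout works under both bitnesses. The field names and packing format below are illustrative, not the exact MSG byte layout.

```python
import struct

# Hedged sketch: the handle field is widened to a fixed 64 bits so the same
# flattened message layout works under both 32-bit and 64-bit Windows; a
# 32-bit HWND simply occupies the low half. Field names loosely follow the
# WinAPI MSG structure, but this is NOT the exact MSG layout.
MSG_FMT = "<QIQQ"   # hwnd(u64), message(u32), wParam(u64), lParam(u64)

def pack_msg(hwnd, message, wparam, lparam):
    return struct.pack(MSG_FMT, hwnd, message, wparam, lparam)

def unpack_msg(buf):
    return struct.unpack(MSG_FMT, buf)
```

A 32-bit handle round-trips unchanged through the 64-bit field, so the same cluster can carry handles from either DLL bitness.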

  3. Yes, I did set it to Modbus. Maybe I don't know how to use the Modbus drivers; I haven't attempted to use such a device with LabVIEW, so I am lost with all these machine codes!

    MAX is able to validate and open a VISA session but this is as far as I can get with it.

    Well.

    The basics are that your PC will be a "master" on the bus (RS485 is multi-drop, meaning you can have multiple units on the same wires). You read/write to an address on the bus (the unit) and registers (memory locations) within the unit that correspond to configuration, inputs and outputs (outputs are sometimes called coils). Generally, the registers are device dependent so you will need the programming manual to identify them.
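    To make the register model above concrete, here is a minimal sketch of building a Modbus RTU "Read Holding Registers" request by hand (Python; the unit address and register numbers below are invented for illustration - the real ones come from your device's programming manual):

```python
def crc16_modbus(data: bytes) -> int:
    # Standard Modbus RTU CRC-16 (init 0xFFFF, reflected poly 0xA001)
    crc = 0xFFFF
    for byte in data:
        crc ^= byte
        for _ in range(8):
            crc = (crc >> 1) ^ 0xA001 if crc & 1 else crc >> 1
    return crc

def read_holding_registers(unit: int, start: int, count: int) -> bytes:
    # Function 0x03 = Read Holding Registers. Which registers mean what
    # is device dependent - see the programming manual.
    frame = bytes([unit, 0x03, start >> 8, start & 0xFF,
                   count >> 8, count & 0xFF])
    crc = crc16_modbus(frame)
    return frame + bytes([crc & 0xFF, crc >> 8])  # CRC sent low byte first

request = read_holding_registers(unit=1, start=0, count=10)
```

The receiver validates by running the same CRC over the whole frame (including the appended CRC), which yields zero for an intact message.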

    However. It would be a good idea to start another thread so that we don't hijack this one.

  4. Hello,

    I tried the Hyper Terminal code that you wrote but I was not able to communicate with my device.

    I have an Omron E5CN temperature controller that I want to use with LabVIEW.

    I am able to communicate with this controller using Omron's Thermomini software.

    MAX is able to validate and open a session with the device. The controller is connected via an RS232-to-USB converter.

    Any advice on how to use this controller with LabVIEW?

    The controller isn't RS232. It is RS485 (according to the docs). Therefore HyperTerminal or equivalent will not work. It looks like it has two modes of operation: CompoWay/F and Modbus. Modbus is an industry standard and, if you have the NI DSC module, you can set it up with the OPC server. Alternatively you can use the VIs I linked.

  5. And here you are wrong. There is no message hook necessary in LabVIEW or any application. It's the event loop at the core of the application, with GetMessage(), that receives these events and then distributes them. And messing with events at that level is for sure going to open up a myriad of possibilities to completely lock up your application from the diagram.

    <snip>

    And message hooking is tricky at best and can easily create dangerous situations where you can lock up your application in very interesting ways that are almost impossible to debug without C source level debuggers.

    Nope. Still not convinced! If they can spend oodles on POOP and trivial eye candy, then they can spend a fraction of that on standard OS event methods for the rest of us. It's not rocket science.

    GetMessage() in a while loop is Windows-only and a pretty poor way of doing it. If that is the method used, then it really does need a revamp. Although the currentEvent on the Mac could be an analogue, there is no equivalent in Linux (X11), as that is purely asynchronous. The bottom line is, NI have to "hook" into the OS message system (either by polling or by registering) to be able to get messages at all. They just don't publish all the messages to us. Whilst it would be nice to have a few more "generic" frames in the event structure that are available across all OSs (after all, there are a lot of similarities), that doesn't mean they cannot provide the raw messages in a frame so we can write platform-specific software (like we do with ActiveX and .NET). Especially if they can't be bothered to wrap some of the common ones up for us ........and don't get me started on VISA events! :P

    I'm also not buying the "lock-up" and "dangerous situations" argument. All these methods are standard event messaging that applications must use to interact with the OS, and they have well-defined wrappers in most other languages. There are lots of code snippets around, and they are all pretty much identical since they just call OS API functions (or X11/Xorg in Linux). Hooking events is very straightforward (as you are about to demonstrate :) ).

    If the argument boils down to "it's hard" (which I refute, since they are already doing it for "some" events - for all the OSs), then that isn't really an excuse for a $4,000 piece of software from a multinational corporation that is quite prepared to come up with a whole new paradigm.

    • Like 1
  6. The problem with this is that it needs to work on all platforms if NI integrates it into the event structure. And that part is really not easily mapped into a generic scheme. Of course NI could implement just about all 500 Windows Message events and their 2000 variants in the event structure and try to find corresponding X Window and Mac OS events. But that would make the event structure absolutely unmanageable, and it would still not cover the issue when you need to interface to software that uses Windows messaging for inter-application communication. So a lot of work for little benefit, and that is a true killer argument for anything.

    Nope. I don't buy it.

    Lots of features work on one platform but not another (because it doesn't exist) but that is beside the point.

    You don't have to implement the "500 Windows Message events and their 2000 variants"; just expose the message hook (which they must already be using for the Event Structure on all platforms) and let the LabVIEW user filter what messages he/she wants with G code.
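    To illustrate the "expose the hook, let the user filter" idea, here is a toy Python simulation. Nothing here touches the real Windows API; only the message numbers are genuine Windows constants, and the stream and payloads are invented:

```python
# Toy simulation of the proposal above. Only the message IDs are the real
# Windows values; the environment hands every raw message to user code,
# which keeps only the ones it registered an interest in.
WM_MOVE = 0x0003
WM_SIZE = 0x0005
WM_MOUSEMOVE = 0x0200

def filter_messages(stream, wanted):
    """Pass through only the raw messages the user asked for."""
    return [(msg, payload) for msg, payload in stream if msg in wanted]

stream = [(WM_MOUSEMOVE, (10, 10)), (WM_MOVE, (100, 200)), (WM_SIZE, (640, 480))]
moves = filter_messages(stream, {WM_MOVE})   # user code wants WM_MOVE only
```

The point being: the environment only has to deliver the raw stream; deciding which of the "500 events" matter is the user's problem, not NI's.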

  7. This along with being able to capture every message posted by Windows (which is what I think you are implying anyway). Why can't I get the Window Move message? Instead I have to get mouse down on the title bar through the Windows API, then register for mouse move within LabVIEW, then handle the mouse move event. Then, on mouse up, I have to unregister for the mouse move event. :angry: Unless, of course, someone has a better way.

    Rolf is about to give you a much better way ;)

  8. Not with the Windows Message Queue library as is, but the use of LVPostUserEvent() as mentioned by ned would certainly make that easy. I'll see if I can have a go at it. It would in fact simplify the DLL part enormously, since the whole queue handling in the C code can go away, as the user event will handle that internally.

    Nice.

    If NI got off their arse and spent time on the event structure, they could easily have included this feature as a native part of the Event structure.

  9. Are the dongles like http://www.smart-lock.com/ much safer?

    A few years ago we had the best protection available: spaghetti code... the worst kind!

    Our SW was so unreadable that no one could have used it. Seriously, we know of a big company in China that tried copying the logic of our source code that they stole and they simply gave up trying.

    However, now that I'm upgrading the code to LVOOP I'm afraid it won't be that hard anymore. This is the only bad thing I have to say about OO :D

    Software is like a fart: yours is OK, but everyone else's stinks. LVOOP just ensures no one can tell who farted :)

    • Like 1
  10. It's general good practice, which makes me curious about this. What if I have an application that runs third-party code, maybe plugins? I want to be sure that when it comes time to run that unknown code I have a clean memory footprint, such that a malicious bit of code can't scrape old data from memory when run from the context of my application. Or maybe the best idea is to run this code from an entirely different context-- a sandbox. This could go so many different ways, and in the end you still need to worry that once that code gets executed, how can you be sure a new keylogger hasn't been spun up? If my plugins are written in native LabVIEW, there's probably nothing I can do about it, but if I have some form of scripted environment where I provide an API to work with, maybe this concern can be managed. I don't have answers to questions like this, which is why I really wanted to start this discussion.

    I'm not trying to argue someone like me should roll their own solution, I'm way too naive about these matters to do so. What I'm really after is if it's even possible for anyone to create a library that properly manages authentication purely in a LabVIEW environment? If so, what are some of the challenges/considerations that are brought up due to LabVIEW?

    This is just a topic that I keep coming back to every other year or two, and I've never come to a satisfactory end of discussion other than "I doubt it's possible in pure LabVIEW." I thought I'd see if anyone else has ideas. I believe that any authentication would have to be handled by external code, such that my LabVIEW code doesn't even get access to the password. Really all my code needs to know is who the user is, and their granted permission level if any.

    If someone has unbridled access to the machine, then there is absolutely nothing you can do to prevent discovery by a determined effort (it is just a matter of time). It doesn't matter what the programming environment is, since I could quite easily drop a hacked Windows DLL and then all bets are off. Zeroing memory is a weak (but not inconsequential) way to protect passwords, since I only need to fire up SoftICE and I can see where it is in memory before you clear it. The hard part is finding it in the first place amongst the thousands of lines of code. As you can probably guess, a dialogue box is an easy way to find where to start, and then following the code to find the string message sent to the OS. So it doesn't matter what code you write; that is the crack where I can place the crowbar ;). The main thing to bear in mind, however, is that a password is a means to an end. A password in itself is of no use. It is the info it guards that is of interest. You could have the most secure program in the world, but it won't be much good if the user writes down the password and puts it on a post-it attached to the monitor. The only purpose of a password dialogue is to prevent someone looking over your shoulder and reading it; no more. If it is a worry, then use a key and lock the PC in a room with no network.

    The issue is more about prevention and detection of malicious programs actually getting on to the machine in the first place without your knowledge (reducing the attack vectors) and, if they do get on there, preventing the info they glean from exiting the machine in a meaningful form (like your private PGP keys) or, at the very least, making it difficult to extract meaningful info if info does get out (like your customer database). Isolation from the interweb :) goes a long way to minimising this, as does not having USB ports (or those ancient things called flippies or something). If a keylogger does get your passwords, then it's not a lot of use if the file that stores them can't be sent to the intended recipient. This is why generally more emphasis is placed on encrypting data, since if you assume that the passwords are unavailable then there is a lot you can do to protect private data.
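    For what it's worth, the "let external code handle the password" approach mentioned above can be sketched with standard salted key stretching: the application only ever sees a hash and a granted permission level, never the plaintext store. A hedged Python sketch (parameters are illustrative, not a security recommendation, and this is not any poster's actual design):

```python
import hashlib
import hmac
import os

# Hypothetical record store: a salted PBKDF2 hash plus a permission level.
# Iteration count and salt size are illustrative values only.
def make_record(password: str, level: str) -> dict:
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return {"salt": salt, "hash": digest, "level": level}

def check(password: str, record: dict):
    """Return the permission level on success, None on failure."""
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(),
                                 record["salt"], 100_000)
    # compare_digest avoids leaking the match position through timing
    return record["level"] if hmac.compare_digest(digest, record["hash"]) else None
```

As the reply above says, none of this helps against an attacker who owns the machine; it only keeps a recoverable plaintext out of your own code and data files.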

  11. It's common in parallel testing on production lines (if I understand what you are asking).

    For example, say you have 10 fixtures but you can only test 2 simultaneously (because that's all the hardware you have).

    When one test completes, it moves to the next non-tested fixture. In between changing from one fixture to another, all tests have to wait so that the switch MUX can re-route to the probes of the next fixture, so any test in progress has to reach an arbitrary completion point (finish its current measurement) before the other test set can move on. If fixture "1" fails (i.e. the overall test time is also variable - failure results in a reduced completion time), then it must move to "3". It can only do so when the test set at fixture "1" has finished its current measurement (once test set "1" has moved on, test set "2" will continue). Additionally, if "3" also fails, then it moves to "4". If fixture "2" then completes, it must ignore fixtures 3/4, since they have already been tested, and move to "5", and so on.
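    The claiming scheme described above can be sketched as a toy, single-threaded simulation (Python; the finish order and fixture count are invented to reproduce the example in the text):

```python
# Toy model of the scheme above: whenever a test set becomes free, it
# claims the lowest-numbered fixture that no set has touched yet, so
# already-tested or in-progress fixtures are automatically skipped.
def simulate(finish_order, n_fixtures):
    """finish_order: test-set IDs in the order they become free."""
    next_fixture = 1          # fixtures numbered 1..n_fixtures
    assignments = {}
    for test_set in finish_order:
        if next_fixture > n_fixtures:
            break             # every fixture has been claimed
        assignments.setdefault(test_set, []).append(next_fixture)
        next_fixture += 1
    return assignments

# Sets 1 and 2 start; set 1 fails fast on fixtures 1 and 3, set 2 passes 2:
order = [1, 2, 1, 1, 2]
result = simulate(order, n_fixtures=10)   # {1: [1, 3, 4], 2: [2, 5]}
```

This reproduces the worked example: set 1 fails on 1 and moves to 3, fails again and moves to 4; when set 2 completes fixture 2 it skips 3/4 and takes 5.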

  12. Apart from the aforementioned aspects, another major reason is that industrial PCs have guaranteed availability and parts longevity with a definitive obsolescence timescale (usually 5-10 years from introduction). They also tend to come with things like RS485, serial COM ports and oodles of USB ports, as well as a large PCI/PCIe slot count.

  13. What I did was that I wired the STOP button to the motor voltage module (inside a SWITCH statement) that I kept outside the WHILE loop. So, when STOP is true, I send zero voltage to the motor. This EXITS the program as well as STOPS the motor.

    If it works. It's good.

    1. Is this a smart way of doing it?

    There are many ways of doing things, and whilst there are "accepted" solutions to common problems, "smart" is subjective. Your only real objective was to "sequence" the drive shutdown, since LabVIEW is inherently parallel, and there are a lot of ways you could have achieved that.

    For example, in addition to your solution, you could have wired the error terminals on the Express VIs to outside the loop (a very common way of sequencing VIs), or, for that matter, the 0. You could also have OR'd the STOP boolean to the selector. Worry about "smart" when you know where all the things are in the palettes, you've got a few utilities in your toolbox, written a few programs and been on some courses.

    This program is the "first pass" (your prototype, if you like). I can guarantee you will be revisiting it to make it "smarter", prettier or more flexible.

    2. This does not work if the user hits the ABORT button instead of the STOP button! So, I went into File --> VI Properties and made the ABORT button invisible when the VI runs. In this situation, the user can only see my STOP button and is forced to use that to exit. Is this OK?

    The abort button in the IDE is for the developer only. In deployed applications, users should not have access to it, since it circumvents shutdown procedures by stopping the code dead. Never give the user any opportunity to do something you are not expecting, otherwise he/she will!

    • Like 1
  14. Suppose a USER hits the stop button during cuff inflation (motor ON). At that point I would like the motor to stop. The program does EXIT, which is fine. But if the motor could also stop, that would be great!

    Just put another motor stop (set the motor value to zero) outside the while loop so that it gets executed after the while loop stops. I'll leave you to figure out how to sequence it so it happens after the loop stops rather than while the loop is running (hint: don't use sequence frames, ever!)

  15. Hi Shaun and Asbo,

    Thank you for your replies!

    I tried what Shaun suggested. Please see pictures.

    The Boolean cross-over is detected and the motor stops momentarily, but then turns back on. Sorry, but I am still a bit new to LabVIEW.

    How do I utilize that cross-over trigger to keep it off for the rest of the recording?

    Thanks again!

    Saif

    The output will be "TRUE" if the direction change is detected. It looks like your case structure is set to turn off in the "False" case. Try swapping the cases around.

    It is usually better to post your VI or an example of what you are trying to achieve rather than an image. For simple VIs you sometimes get back the VI with the changes.
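    The "detect the cross-over once, then keep the motor off for the rest of the recording" behaviour amounts to a rising-edge detector feeding a latch (in LabVIEW terms, a Boolean Crossing/Trigger VI plus a feedback node holding the latched state). A text-language sketch of the logic, with invented sample values:

```python
# Rising-edge detector feeding a latch: the motor turns off at the first
# cross-over and stays off, even if the direction signal toggles again.
def motor_enable(direction_samples):
    previous = False
    latched = False
    states = []
    for direction in direction_samples:
        if direction and not previous:   # rising edge = cross-over detected
            latched = True
        previous = direction
        states.append(not latched)       # motor stays off once latched
    return states

# Cross-over at sample 3; the motor never re-enables afterwards:
assert motor_enable([False, False, True, False, True]) == \
       [True, True, False, False, False]
```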

    Thank you, I didn't know this VI existed. I've always been using the OpenG Boolean Trigger VI, which seems to have a better name, and icon, for the function it performs, but I would rather use an NI VI than an OpenG VI if their functions are the same.

    There is a lot of replication in the OpenG VIs, presumably for completeness. Not using the OpenG stuff forces me to look in some of the more obscure palettes.

  16. I took a look at JSON and I definitely like it better than XML. The problem in this particular case is that I will have binary data as part of the data set and JSON doesn't look like it supports that very well. I will probably just define the data format using a basic C style structure and decode/encode it in LabVIEW.

    I use base64 because everyone has base64 decoders, and it has the benefit that images can be planted straight into a browser without explicit conversion (although that's probably not of much interest for your use case).
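    The browser point is easy to demonstrate: a base64-encoded image dropped into a data URI renders directly, with no conversion step. Python sketch (the bytes here are just the PNG file signature standing in for a real image):

```python
import base64

# Embed arbitrary image bytes in a data URI that a browser can render
# directly, e.g. as the src of an <img> tag.
def to_data_uri(image_bytes: bytes, mime: str = "image/png") -> str:
    b64 = base64.b64encode(image_bytes).decode("ascii")
    return "data:{};base64,{}".format(mime, b64)

uri = to_data_uri(b"\x89PNG\r\n\x1a\n")   # PNG signature as a stand-in
```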

    • Like 1
  17. There are filter examples shipped with the FPGA module (look for "notch filter"), and there are IP blocks you can download from NI that give you more filters, such as the median filter. You can even create your own IP using the IP wizard in the project manager if push comes to shove, but I've never used it (I'm lazy like that).

    It is not unusual for the derivative term to do nothing in some systems. With such a small integral, it is arguable whether you even need a PI controller (P-only with a dead-band, for example). However, it's "software", so what the heck, eh? It works, and that's the main point.

  18. Well. As you know. It is the integral term that dampens ringing and overshoot.

    ±0.02 V out of ±10 V is 0.2% of FSD, and I expect that is within the spec of the device. So the choices as I see them, in the absence of being able to increase the integral and without getting into process modelling, are:

    1. Apply a 50-60Hz band-stop filter on the input (try and reduce the amplitude of the mains signal).

    2. Apply a median filter to the Derivative term (reduce the feedback sensitivity to noise)

    3. Introduce a dead-band (turn off control when within set limits-good for mechanical systems)

    4. All or a combination of the above.
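    Suggestions 2 and 3 above can be combined in a few lines. A toy Python controller with a median filter on the derivative path (to blunt 50/60 Hz spikes) and a dead-band on the error; the gains, window size and dead-band width are arbitrary illustration values, not tuned:

```python
from statistics import median

# Toy PI(D) step combining suggestions 2 and 3 from the list above.
class FilteredPID:
    def __init__(self, kp=1.0, ki=0.1, kd=0.05, deadband=0.02, window=5):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.deadband, self.window = deadband, window
        self.integral = 0.0
        self.history = []
        self.prev_filtered = 0.0

    def step(self, error, dt=1.0):
        if abs(error) < self.deadband:
            return 0.0                      # suggestion 3: dead-band
        self.history = (self.history + [error])[-self.window:]
        filtered = median(self.history)     # suggestion 2: median filter
        derivative = (filtered - self.prev_filtered) / dt
        self.prev_filtered = filtered
        self.integral += error * dt
        return self.kp * error + self.ki * self.integral + self.kd * derivative
```

Filtering only the derivative path keeps the proportional response fast while stopping noise spikes from slamming the output.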

    • Like 1
  19. Interesting little project that opens up a couple of possibilities (like automagically in-lining those identified as capable of being in-lined). Maybe it can be used for older versions by running it in 2012 and back-saving to a previous version (2010/2011?). Can it be extended to include, or say something about, subroutine VIs?

    I get many error prompts ("Error 1026 occurred at Close Reference: VI reference invalid") when scanning either a directory or a project. I can get file output and do analysis, though.

    A bit of documentation wouldn't go amiss as to what "complexity" actually means and what can be gleaned from it. I assume a complexity of >5 is "too complex"? Does this mean it is a candidate for refactoring?

    If a vi is marked as "Partially Optimised", what does it mean and what can be done about it, if anything? etc.

    It needs a bit of love ;), but very interesting.

    • Like 1