
ShaunR

Posts posted by ShaunR

  1. You're lucky I can't take them back. :P

    Touché ;)

    That diagram doesn't do what I need. I need to keep every bit of data from the 16-bit ADC; your diagram is going to lose precision. I could avoid that by having them enter all the scaling information in the ini file... except I'm collecting data from m sensors simultaneously and there are n possible sensors to choose from to connect. On top of that any sensor can potentially be hooked up to any terminal. And they'll need to be able to add new sensors whenever they get one, without me changing the code. Oh yeah, it has to be easy to use.

    Can all this be done with an ini? Sure, but the bookkeeping is likely to get a bit messy, and editing an ini file directly to control the terminal-channel-scale-sensor mapping is somewhat more error prone than setting them in Max. Implementing a UI that allows them to do that is going to take dev time I don't have right now, and since Max already does it I'm not too keen on reinventing the wheel.

    I don't think your technique is bad--heck I'm ALL for making modular and portable code whenever I can. This is one bit of functionality where I need to give up the "right" way in favor of the "right now" way.

    It was just to show branching; what the numbers are is irrelevant. That's why I don't understand your difficulty with reading one value and showing another. I could just as easily have read an int and displayed a double.

    But anyway.......

    Just saving the ADC counts won't give you more precision. In fact, the last bit (or more) is probably noise. It's the post-processing that gives a more accurate reading. You typically gain about half a bit of precision for each doubling of the sample rate, and with post-processing like interpolation and averaging, significant improvements can be made (this is quite a good primer on the subject). What's the obsession with saving the ADC values?
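
    To put a number on that: averaging blocks of oversampled readings trades sample rate for effective resolution at roughly half a bit per doubling. A quick numpy sketch (not LabVIEW, and the signal, range and noise figures are invented just for the illustration):

        import numpy as np

        # Hypothetical 16-bit ADC reading a steady 1.2345 V signal on a 0-5 V
        # range, with about 1 LSB of Gaussian noise on top.
        full_scale = 5.0
        lsb = full_scale / 2**16
        true_value = 1.2345

        rng = np.random.default_rng(0)
        samples = true_value + rng.normal(0.0, lsb, size=1_000_000)

        for n in (1, 4, 16, 64):                     # average blocks of n readings
            blocks = samples[: len(samples) // n * n].reshape(-1, n).mean(axis=1)
            extra_bits = 0.5 * np.log2(n)            # ~0.5 bit per doubling
            print(f"avg of {n:>2}: noise = {blocks.std() / lsb:.2f} LSB "
                  f"(~{extra_bits:.1f} extra effective bits)")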

    Now, from your n and m descriptions, I'm assuming you're thinking n×m configurations (is that right?). But you don't care what the sensor is, only that it has an analogue output which you can measure. You can't log data from n×m devices simultaneously because you only have m channels. So you only have to configure m channels (or the engineers do, at least). If you allow them to make a new task every time they change something, the list of tasks in MAX very quickly becomes unmanageable. We use 192 digital IOs, for example. Can you imagine going through MAX and creating a task for each one?

    What you are describing is a similar problem to one we have with part numbers. It's a management issue rather than a programming one. We (for example) may have 50 different part numbers, all with different test criteria (different voltage/current measurements, excitation voltages, pass/fail criteria etc.). But they all use the same hardware, of course, otherwise we couldn't measure it.

    So the issue becomes how we can manage lots of different settings for the same hardware. Well, one way is a directory structure where each directory is named with the part number and contains any files required by the software (camera settings, OCR training files, DAQ settings, ini files, pass/fail criteria... maybe 1 file, maybe many). The software only needs to read the directory names and hey presto! A drop-down list of supported devices. New device? New directory. You can either copy the files from another directory and modify them, or create a fancy UI that basically does the same thing. Need back-ups? Zip the lot :D Need change tracking? SVN!

    Another is a database, which takes a bit more effort to interface to (some think it's worth it), but the back-end for actually applying the settings is identical. And once you've implemented it you can do either just by using a case statement. :D
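
    As a rough Python sketch of that layout (not our actual code; the root path, file names and table layout are all invented for the example), the "read the directory names, hey presto" part and the file-or-database case statement come out as something like:

        import configparser
        import os
        import sqlite3

        CONFIG_ROOT = "C:/TestConfigs"     # hypothetical: one sub-directory per part number

        def list_part_numbers(root=CONFIG_ROOT):
            """Directory names become the drop-down list of supported devices."""
            return sorted(d for d in os.listdir(root)
                          if os.path.isdir(os.path.join(root, d)))

        def read_settings_from_files(part_number):
            """File back-end: read settings.ini out of the part's directory."""
            ini = configparser.ConfigParser()
            ini.read(os.path.join(CONFIG_ROOT, part_number, "settings.ini"))
            return {section: dict(ini[section]) for section in ini.sections()}

        def read_settings_from_db(part_number, db=os.path.join(CONFIG_ROOT, "settings.db")):
            """Database back-end: different storage, same shape of result."""
            with sqlite3.connect(db) as conn:
                rows = conn.execute(
                    "SELECT section, key, value FROM settings WHERE part_number = ?",
                    (part_number,))
                settings = {}
                for section, key, value in rows:
                    settings.setdefault(section, {})[key] = value
                return settings

        def read_settings(part_number, backend="files"):
            # The "case statement": the rest of the software never knows which it was.
            readers = {"files": read_settings_from_files, "db": read_settings_from_db}
            return readers[backend](part_number)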

    What you will find with the NI products is that there really aren't that many settings to change. Maybe switching between current loop and voltage, and maybe the max/min, and you will be able to measure probably 99% of analogue devices. Do they really need to change from a measurement of 0-1 V when 0-5 V will give near enough the same figures (do they need µV accuracy? Or will mV do? Don't ask them, you know what the answer will be :P). Do we really need to set a 4-20 mA current loop when we can use 0-20 mA (it's only an offset start point, after all)?

    Heh... they're test engineers in a product development group. They play with everything you can imagine related to the product. (Read: they want infinitely flexible test apps.) But they don't play with the data. That better be rock solid.

    Indeed. And I would much rather spend my programming time making sure they can play with as little as possible, because when they bugger it up, your software will be at fault :P You'll then spend the next week defending it before they finally admit that maybe they did select the wrong task :D

  2. I'm using Firefox. I configure the output of the web service as an XML file, and I receive it fine in my browser, but when I add some additional XML tags, the special characters of those tags get changed, so when I try to parse the XML response with JavaScript it doesn't recognize the &lt; entity as the < character.

    Use an html_entity_decode function in your JavaScript (it's a PHP function originally, but JavaScript ports of it are readily available).

    html_entity_decode(string)

    Normal characters will remain unaffected, but entities like &lt; and &amp; will be converted.
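
    (The decode itself is trivial; in Python's standard library, for comparison, it's html.unescape. The JavaScript side needs the equivalent before the string goes to the XML parser. The sample string below is made up.)

        from html import unescape

        raw = "&lt;value&gt;3.14&lt;/value&gt;"
        print(unescape(raw))   # -> <value>3.14</value>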

    Damn. Now I'm a text heretic :D

  3. I appreciate all the responses. Kudos for everyone! :)

    Wrong site. It's rep-points here :D

    Well yeah, but show what on the graph? Users want to see scaled data; I want to store raw data. In a nutshell, the question was about manually transforming between raw, unscaled, and scaled data.

    That's what I mean. These are mutually exclusive?

    Does this mean I can't create separate tasks in Max that all use the same physical channels, even though I'll only be using one task at a time?

    Hmm... this could be a problem over the long term. We're going to be using a single data collection computer to measure signals from different sensors, depending on the test being done. I had planned on having the test engineers use Max to set up new tasks, channels, scales, etc. and select the correct task in the test app. But if that's not possible I'll have to create my own interface for that. (Ugh...)

    Yes of course you can. But it depends if it's the horse driving the cart or the other way round.

    As soon as you start putting code in that needs to read MAX's config so you know how to interpret the results, you might as well just make it a text file that they can edit in Notepad or a spreadsheet program, and when you load it you already have all the information you need without having to read it all from MAX. Otherwise you have to first find out what tasks there are and, depending on what has been defined (digital, AI, AO?), put switches in your code to handle the properties of the channels. However, if you create the channels on the fly, you don't need to do all that. It also has the beneficial side effect that you can do things like switch from a "read file.vi" to a "read database.vi" (oops, I meant a Read Config class :D) with little effort.
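
    Something like this is what I mean by creating the channels on the fly (sketched with the nidaqmx Python API rather than LabVIEW; the file format, device names and channel names are all invented, so treat it as the shape of the idea rather than working code for your rig):

        import csv
        import nidaqmx

        # channels.txt, editable in Notepad or a spreadsheet, e.g.:
        #   physical,name,min,max
        #   Dev1/ai0,accel_x,-5,5
        #   Dev1/ai1,gyro_z,-5,5

        def build_task(config_path="channels.txt"):
            """Create a DAQmx task and add one AI channel per line of the config file."""
            task = nidaqmx.Task()
            with open(config_path, newline="") as f:
                for row in csv.DictReader(f):
                    task.ai_channels.add_ai_voltage_chan(
                        row["physical"],
                        name_to_assign_to_channel=row["name"],
                        min_val=float(row["min"]),
                        max_val=float(row["max"]))
            return task

        with build_task() as task:
            print(task.read(number_of_samples_per_channel=10))

    Swapping the text file for a database (or for reading it out of MAX, for that matter) only changes what happens inside build_task, which is really the point.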

    However, if they are just "playing" then you are better off telling them to use the "Panels" in MAX.

  4. Hi. I'm in the process of identification on the process pressure rig 38-714, using Modbus with the process controller 38-300 from Feedback. What I want to know is how to set and address the process control and process variables on the process controller 38-300.

    I previously tried to use the NI OPC server and I get a value of 64537 for the process variable, and the value does not change even though I have changed the parameters of the plant. Any suggestions?

    All devices have different address maps for the PV. You will need to read the manual.
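
    Once you have the register address out of the manual, reading the PV is only a few lines. A hedged sketch using the pymodbus library (assuming pymodbus 3.x and Modbus RTU over a serial port; the COM port, baud rate, slave ID and register address below are placeholders, not the real 38-300 values):

        from pymodbus.client import ModbusSerialClient   # pymodbus 3.x

        PV_REGISTER = 0x0001   # placeholder - take the real address from the manual
        SLAVE_ID = 1           # placeholder - the controller's configured Modbus ID

        client = ModbusSerialClient(port="COM3", baudrate=9600, parity="N", stopbits=1)
        if client.connect():
            # The slave/unit keyword differs between pymodbus versions
            # (unit= in 2.x, slave= in most 3.x releases).
            rr = client.read_holding_registers(PV_REGISTER, count=1, slave=SLAVE_ID)
            if not rr.isError():
                raw = rr.registers[0]          # raw 16-bit register value
                print("PV register:", raw)     # apply the scaling from the manual
            client.close()

    Incidentally, 64537 read as a signed 16-bit number is -999, which looks more like an out-of-range or error indication than a real reading; worth checking against the manual too.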

  5. Sorry, no helpful advice other than put off that goal for another couple years, or maybe forever if you can. (It is a noble goal--it just doesn't look very achievable right now)

    Or un-check the "show warnings" option when viewing this library :rolleyes:

    Interestingly, if you change something (like the mechanical action or the unbundle names), the warnings disappear... until you save it :unsure:

    Think you may have found a feature.

  6. I have tried both approaches; I particularly like integrating with MAX in certain situations:

    Here is a brief list of my FORs:

    • MAX is usually installed (or it's not a hassle to just install the Full Driver, which the client gets on disk with the hardware anyway).
    • MAX already has an interface, so there's no need to create one (saves budget).
    • And having a GUI is normally easier for a customer to navigate than a config file API (depending on the client, of course).
    • MAX includes the ability to handle scales of different types (linear, non-linear etc.) - again, I don't have to account for this in my code.
    • Communicating with MAX is really easy using the Task-based API (through PNs etc.)
    • (So far) clients seem to like using MAX
    • Especially if they have used it before; then you can maintain a consistent interface for hardware configuration.
    • It's easy to back up your configuration and port it over to another PC etc.

    Some of my AGAINSTs:
    • It separates your (custom) application
    • Your application has a dependency on MAX

    OK. Here are some of my FORs for (not) using MAX.

    • MAX is never installed; it just bloats the installation, and if it crashes it will take your whole measurement system down and you will get the telephone call, not NI.
    • MAX already has an interface, which doesn't fit with either our or our customers' "corporate" style requirements for software (logos etc.).
    • And having a GUI is normally easier for a customer to navigate - and that's the last thing we want, since they are neither trained nor qualified to do these operations and we cannot poka-yoke it.
    • MAX includes the ability to handle scales of different types (linear, non-linear etc.) - but these cannot be updated from integrated databases and other 3rd-party storage.
    • Communicating with MAX is really easy using the Task-based API (through PNs etc.) - because MAX sits on top of DAQmx, so what we are really doing is configuring DAQmx.
    • (So far) clients seem to like using MAX - do they have an alternative?
    • It's easy to back up your configuration and port it over to another PC etc., as it is with any other file-based storage - except that text-based files can be tracked in SVN.

    And some more....
    • You have to support more 3rd-party software for which there is no source and no opportunity to add defensive code for known issues.
    • It requires a MAX installation to make trivial changes, as opposed to software available on most office machines (such as Excel, Notepad etc.).
    • It does not have the ability to easily switch measurement types, scaling etc. to do multiple measurements with the same hardware.
    • MAX requires firewall access (I think), and this can be an issue with some anally retentive IT departments that decide to push their policies onto your system.
    • As mentioned above, it cannot integrate 3rd-party storage such as SQL, Access or SQLite databases (mentioned again because it is a biggie), or indeed automated outputs from other definitions (like specs).
    • MAX assumes you have a mouse and keyboard. It's very difficult to use with touch-screens operated by gorillas with hands like feet.

    But I think our customers are probably a bit different. They don't want to "play"; they just want it to work! And work 7 days a week, 24 hours a day. We even go to great lengths to replace the Explorer shell and start-up logo so operators aren't even aware that it's Windows. :ph34r:

    Our system is quite sophisticated now, though. It can configure hardware on different platforms using various databases, text files, specification documents etc., and it can be invoked at any time to reconfigure for different tests if there are different batches/parts. It's probably the single most re-used piece of code across projects (apart from perhaps the Force Directory VI :lol:). I tend to view MAX in a similar vein to Express VIs. :P But that's not to say I never use it.

    • Like 1
  7. NI's PSP protocol seems pretty nice, and it sure is a fast way to share data/messages between a Target and a Host application.

    I haven't tried the Network ("this is the way SVs were meant to be used") Streaming feature of LV 2010, but that looks very cool.

    Also, tying into a database or alarms is pretty straightforward, so I have found you could save a lot of work using them.

    Give it a bash. I think you'll like it (drop it below 10 ms or try about 10 MB of data and see what happens). Then benchmark it against the Dispatcher ;)

  8. Though they can be a little cumbersome. I usually avoid MAX as much as I can since its portability is practically nonexistent and the interface is straight out of the early 90s.

    Agree with most of that. But especially the above.

    We usually create the channel associations at run-time in a similar manner to this:
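
    Roughly, in the nidaqmx Python API rather than LabVIEW (the device, channel and scale names here are invented), the idea is to build the scale and bind it to the physical channel when the task is created, instead of picking a pre-made MAX task:

        import nidaqmx
        from nidaqmx.constants import VoltageUnits
        from nidaqmx.scale import Scale

        # Hypothetical accelerometer: 0.5 V per g, i.e. a slope of 2 g per volt.
        Scale.create_lin_scale("accel_2g_per_V", slope=2.0, y_intercept=0.0)

        task = nidaqmx.Task()
        task.ai_channels.add_ai_voltage_chan(
            "Dev1/ai0",                              # physical terminal (made up)
            name_to_assign_to_channel="accel_x",     # our name, not MAX's
            min_val=-10.0, max_val=10.0,             # limits in *scaled* units (g)
            units=VoltageUnits.FROM_CUSTOM_SCALE,
            custom_scale_name="accel_2g_per_V")

        print(task.read())                           # one scaled sample per channel
        task.close()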

  9. due to the error wiring to sequence dataflow they can be a much better choice.

    That's a bit like saying you shouldn't use GetTickCount because it does not have error terminals. The two main arguments against global variables are that they make debugging difficult and cause race conditions across VI boundaries as well as within a VI. The use of an error cluster or not is irrelevant (I think). If you mean a choice between a global variable and a shared variable, then, in line with the anti-globalisation (lol) posse, neither should be used since they are both global variables and this is a sin :P

    Also, you are never going to use a NPSV in a time critical loop (or any serious loop), you are going to use a SPSV.

    I agree. Oops. No I don't :D Or maybe I do ;) I agree I am never going to use an NPSV in a time-critical loop (and by time-critical I mean on a real-time NI system). And I agree (on a real-time NI system) I am "probably" going to use the SPSV. But I am not going to use either in normal LV unless I want easy network comms (well, not even then... :rolleyes:).

  10. I think you are all missing the point.

    The OP was questioning the difference between global variables and shared variables in a single-process system. In fact, the argument against global variables is exactly the same for SVs (SVs are "super" global variables). SVs have a network feature, and that is the only reason people (should?) use them (but they have limitations that make them unusable in some applications). They were designed for real-time targets but moved over to mainstream LabVIEW as an "easy" network comms option.

  11. So... having been using LabVIEW heavily over the past 4 years, I am now, for the first time, writing an app that significantly uses DAQmx. Yep, I'm a total noob when it comes to doing those things that have historically been LabVIEW's bread and butter.

    Background:

    Right now I have a state machine running in a parallel loop continuously reading (using a PCI-6143) analog signals from an accelerometer and gyroscope. This data collection loop posts the data on a queue as an array of waveforms. A data processing loop gets the data, updates the front panel, and streams the data to a TDMS file.

    I have a task set up with all 6 channels and a scale for each channel that converts the analog signal to g's and deg/sec. Since the sensors have multiple sensitivity settings, the user will (I think) be able to create new scales and apply them to the channels via NI MAX when they change a setting. All good so far.

    Each point of the waveform data gets sent to my data processing loop as a DBL, using 8 bytes of space. The PCI-6143 has 16-bit ADCs. I can cut the storage requirements by 75% by converting the DBL to an I16. If I knew the possible range of the waveform I could multiply each point by 2^15 and round it off. As near as I can tell, the waveform data comes to me post-scaled and without any scaling information.

    Question:

    How do I go about showing the post-scaled data to the user while saving unscaled data and scale information to disk?

    If you save it as a single-precision float you will save 50%, you won't have to do integer scaling, and you will have a much better approximation. Depends on how important disk space is. Apart from that I'm not really sure what you are asking here. :blink: The answer "show it on a graph" seems too simplistic. :P
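
    To put numbers on it (a numpy sketch with an invented, synthetic waveform): single precision halves the storage and still carries about 7 significant digits, which is more than a 16-bit ADC gives you, whereas packing into I16 saves 75% but only if you know, and store, the scale:

        import numpy as np

        # Synthetic "scaled" waveform as it might arrive from DAQmx: float64, in g.
        t = np.linspace(0, 1, 100_000)
        wave = 2.0 * np.sin(2 * np.pi * 50 * t)               # +/-2 g, 50 Hz

        as_f32 = wave.astype(np.float32)                      # 4 bytes/sample
        scale = (2.0**15 - 1) / np.abs(wave).max()            # needs the range up front
        as_i16 = np.round(wave * scale).astype(np.int16)      # 2 bytes/sample + the scale

        for name, arr in (("f64", wave), ("f32", as_f32), ("i16", as_i16)):
            print(f"{name}: {arr.nbytes / 1e6:.1f} MB")
        print("worst f32 error:", np.abs(as_f32 - wave).max())
        print("worst i16 error:", np.abs(as_i16 / scale - wave).max())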

  12. Hi Raj,

    I checked the discussion that you had and I am also implementing a similar thing. Can you please tell me the AT commands that we need to pass in LabVIEW in order to send an SMS?

    It's a GPRS modem from BenQ; G 32R is the model number. Any help of yours in this regard is really appreciated. If you could share that code then it would be great.

    Hoping for a reply from you.

    Thanks and Regards,

    Priyank

    Here Ya Go.

    This'll put an SMS into your unread box. You just need to send it then. This site is useful for common commands.
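
    For reference, the AT side is standard GSM (3GPP TS 27.005): AT+CMGF=1 puts the modem in text mode, AT+CMGW writes a message into the modem's message store, and AT+CMSS=<index> (or AT+CMGS to send directly) actually sends it. A rough pyserial sketch, with the COM port, phone number and delays as placeholders:

        import time
        import serial   # pyserial

        def at(modem, cmd, wait=1.0):
            """Send one AT command and return whatever the modem answers."""
            modem.write((cmd + "\r").encode())
            time.sleep(wait)
            return modem.read(modem.in_waiting or 1).decode(errors="replace")

        # Placeholders: COM port, baud rate and phone number are examples only.
        with serial.Serial("COM4", 9600, timeout=1) as modem:
            print(at(modem, "AT"))              # expect "OK"
            print(at(modem, "AT+CMGF=1"))       # text mode
            modem.write(b'AT+CMGW="+441234567890"\r')
            time.sleep(0.5)                     # wait for the ">" prompt
            modem.write(b"Hello from LabVIEW" + bytes([26]))   # Ctrl-Z terminates
            time.sleep(2)
            print(modem.read(modem.in_waiting or 1).decode(errors="replace"))
            # "+CMGW: <index>" = stored; AT+CMSS=<index> would then send it.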

  13. I'm not over-enthusiastic about notifiers (after my initial enthusiasm at seeing the potential). They could have been fantastic for my everyday use, but for one caveat: you actually have to be waiting for it to register the notification. You can't (for example) send a notification first and have the wait, when it executes, pick it up and remove it from the notifier; it will just wait. This behaviour is no good for asynchronous systems, since you cannot guarantee that a notifier will be waiting when you send the message. Wait with history doesn't cut it either, since the wait will execute multiple times. So you end up synchronising the systems manually to ensure the wait is always executed first, which defeats the object.

  14. I had a deadlock issue that has been dogging me for almost two weeks now, and I finally understand what's been happening. I figured I'd share the experience because, to me at least, it seems to be caused by such an esoteric detail that I'm surprised I was even able to track it down. I'm hoping that if at least one other person learns something, the time I spent on this will be somewhat redeemed. So if you will, it's story time.

    Please examine this little bit of code. The only non-stock VI is a simple read accessor for an object. Everything else is DVR, notifier, or error related. It's LV2010 code.

    [Attached screenshot of the VI's block diagram]

    (Ignore the breakpoint please).

    This little bit of logic has proven to be the bane of my existence for some time now.

    I'll explain the logic briefly: This VI is meant to stop an asynchronous task that the PumpRef DVR encapsulates. The VI obtains a notifier reference, and checks for an existing notification via a 0 ms timeout.

    False case: If we don't timeout, that means a previous notification exists and the task has already stopped (this is the case shown in the screenshot), and no operation on the DVR is performed.

    True case: If we do timeout, this means the asynchronous task might still be running (yes might, just trust me on this, I said it's story time, not thesis time). So we send a signal to the asynchronous task to tell it to stop. This is not shown in the screenshot, it's in the True case of the case structure.

    We then release the lock on the DVR, and block on the same notifier. One of two things should happen:

    1) If the false case above fired, we'll just pass right through the wait since a notification already existed.

    2) If the true case fired, at this point we'll block until the asynchronous task returns, then we'll be off to the races because the last thing the task does is signal the notifier.

    Now there's a huge problem with this. The logic above is sound, but there's a very important implementation caveat about using the Wait on Notification primitive:

    Emphasis added. It's the emphasized part that bit me in the behind. The logic of the framework I'm working on is a little more complicated than the simple case I outlined here (big surprise, huh?) and it turns out that the VI will sometimes be called twice in succession. Well, guess what, in that case the logic works like this:

    First call, first Wait on Notification primitive: Timeout, the asynchronous task is running. A signal is sent (not shown, True case of the structure), and it starts the shutdown sequence.

    First call, second Wait: Blocks, eventually the asynchronous task returns, signalling the notifier, and the VI ultimately returns.

    Second call, first Wait: Returns notification, this particular instance of the prim has never seen the notification before.

    Second call, second Wait: Deadlock.

    Why a deadlock? Because the second instance of the Wait prim has already received the notification in the first call to the VI. It will never return.

    The solution you ask? Use a single element queue, and do queue previews.

    The lesson: Be very careful when reusing notifier primitives if you're expecting to be able to receive old notifications.

    Bonus points: Reentrancy does not seem to affect primitive reuse. The VI above is in fact reentrant with multiple clones flying around. If clone 1 of the VI fires first, clone 2 will still deadlock. I did not expect this (queue up another few days of debugging)...

    So cheers, and thanks for paying attention if you kept reading this far.

    -Michael

    Gnarly one. :yes:

    You don't get this problem if you acquire a reference before waiting by the way.
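
    For anyone following along outside LabVIEW: what the single-element-queue-with-preview buys you is a latch that any number of waiters can check any number of times, rather than a one-shot notification. As a loose Python analogy (not the LabVIEW code, just the shape of the pattern):

        import threading
        import time

        class AsyncTask:
            """Toy stand-in for the pump task: "stopped" is a latch, not a one-shot message."""

            def __init__(self):
                self._stop_requested = threading.Event()
                self._stopped = threading.Event()
                threading.Thread(target=self._run, daemon=True).start()

            def _run(self):
                while not self._stop_requested.is_set():
                    time.sleep(0.05)                 # pretend to pump
                self._stopped.set()                  # latched: every later wait() sees it

            def stop(self):
                if not self._stopped.is_set():       # the "preview": check without consuming
                    self._stop_requested.set()
                self._stopped.wait()                 # safe from any caller, any number of times

        task = AsyncTask()
        task.stop()
        task.stop()    # a second call returns immediately instead of deadlocking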

  15. So, I'm doing really menial work right now, but the perfectionist in me demands I do it.

    Might be going a little crazy while doing it though.

    But I had an idea.

    So I posted it.

    Basically, when editing the icon of a dynamic dispatch VI, I'd like to automatically be able to apply an icon change to all child implementations. Not unlike how icon changes propagate through all objects when you edit a library icon, only you know, now for dynamic dispatch VIs.

    You all agree?

    Oh man, that last one I fixed is off by one pixel relative to its parent...

    -m

    I feel your pain. :D

    But why limit it to classes? Why not also be able to apply it to a load of VIs in a virtual folder, or be able to select multiple VIs that you want to apply it to?

    • Like 1