Everything posted by ShaunR

  1. Well. It's probably a bit more than that. How do you know how big an array to pass? If the data overwrites the end of the array, it will crash LabVIEW.
  2. I haven't looked, but it sounds like a C string issue. Rather than returning an array of bytes, a C string type is used to get the data into LabVIEW. People often prefer the C string because it only requires one call, forgetting that it can't be used on binary data. To get an array of bytes, you usually have to call the function twice: first with a NULL array of length 0 to get the length, then again with an array of the right dimension size (if there is no dedicated function for that purpose). A sketch of the two-call pattern is below.
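To make that two-call pattern concrete, here is a minimal C sketch of what such a DLL export might look like on the other side of the Call Library Function Node. `get_data` and its payload are hypothetical, not any real API:

```c
#include <stdlib.h>
#include <string.h>

/* Hypothetical export using the two-call pattern: call with
 * buf == NULL to query the required size, then call again with
 * a buffer at least that big. Returns 0 on success. */
static const unsigned char payload[] = { 0x01, 0x00, 0x02, 0xFF };

int get_data(unsigned char *buf, size_t *len)
{
    if (buf == NULL || *len < sizeof payload) {
        *len = sizeof payload;            /* report required length */
        return buf == NULL ? 0 : -1;
    }
    memcpy(buf, payload, sizeof payload);
    *len = sizeof payload;
    return 0;
}

int main(void)
{
    size_t len = 0;
    get_data(NULL, &len);               /* first call: get the length */
    unsigned char *buf = malloc(len);   /* LabVIEW: initialise a U8 array of `len` elements */
    get_data(buf, &len);                /* second call: fetch the data */
    free(buf);
    return 0;
}
```

Note the 0x00 in the payload: it would truncate the C-string version at one byte, but passes through the byte-array version untouched.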
  3. I don't know about ZXing, but the LabVIEW barcode reader reports it as a Pharmacode with the string "1314".
  4. Never expose a database directly, and always, always use TLS or SSH tunnelling. Use certificate pinning wherever possible. The preferred method is a web server to authenticate, and then HTTPS or websockets depending on the type and frequency of the data. The current trend is for web APIs, which you can easily do in LabVIEW and which insulate your software, somewhat, from SQL injection (see the sketch below).
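The insulation comes from the API layer binding untrusted input as parameters instead of concatenating it into SQL strings. A minimal sketch using the SQLite C API (the `users` table and its columns are invented for illustration):

```c
#include <sqlite3.h>
#include <stdio.h>

/* Bind untrusted input as a parameter instead of concatenating it
 * into the SQL string, so it can never be interpreted as SQL. */
int find_user(sqlite3 *db, const char *untrusted_name)
{
    sqlite3_stmt *stmt;
    const char *sql = "SELECT id FROM users WHERE name = ?;";

    if (sqlite3_prepare_v2(db, sql, -1, &stmt, NULL) != SQLITE_OK)
        return -1;
    sqlite3_bind_text(stmt, 1, untrusted_name, -1, SQLITE_TRANSIENT);

    while (sqlite3_step(stmt) == SQLITE_ROW)
        printf("id = %d\n", sqlite3_column_int(stmt, 0));

    sqlite3_finalize(stmt);
    return 0;
}
```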
  5. The VI is incomplete. If you press the run button it will show you the errors and most of them will be unwired inputs. The TODOs need to be implemented.
  6. Isn't that what non-disclosure agreements are for?
  7. I think you are going to need NI's input on this one.
  8. Or do you want a compile time of 7 hours instead of 20 minutes?
  9. The main issue with TestStand is that it tries to be all things to all people. It's pitched as a test sequence engine but is too complicated and cumbersome for that. The main UI is far too complicated for production, and the "screen" hooks are awkward and difficult to implement. Reports seem like an afterthought, and the LabVIEW hooks are prone to crashing. If you thought global variables were bad, well, here we have several different varieties with different scopes, and figuring out where things are defined or coming from is a very deep rabbit hole. I greatly simplified my life when using TestStand by having a single VI TCP/IP connector that just emits an API string, which you define in the TestStand editor, and a service VI that receives the string and invokes the actual tests - basically reducing TestStand to a command/response recipe script that orders tests, retrieves results and throws up a big PASS/FAIL at the end (see the sketch below). At that point it really doesn't matter what generates the API strings - TestStand or a custom sequencer.
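As a rough sketch of that command/response idea, here is a minimal line-based TCP dispatcher in C (POSIX sockets; the port and command names are invented, and error handling is omitted for brevity). The real service VI would parse the string and invoke the matching test:

```c
#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/types.h>
#include <unistd.h>

/* Minimal command/response service: read one command line per
 * connection, dispatch it, and reply with a result string. */
int main(void)
{
    int srv = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in addr = { 0 };
    addr.sin_family      = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_LOOPBACK);
    addr.sin_port        = htons(5555);     /* port is arbitrary */

    bind(srv, (struct sockaddr *)&addr, sizeof addr);
    listen(srv, 1);

    for (;;) {
        int con = accept(srv, NULL, NULL);
        char cmd[128] = { 0 };
        ssize_t n = recv(con, cmd, sizeof cmd - 1, 0);
        if (n > 0) {
            cmd[strcspn(cmd, "\r\n")] = '\0';
            /* Dispatch: the sequencer only sees command/response. */
            if (strcmp(cmd, "TEST:VOLTAGE") == 0)
                send(con, "PASS\n", 5, 0);     /* run test, report result */
            else
                send(con, "ERROR:UNKNOWN\n", 14, 0);
        }
        close(con);
    }
}
```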
  10. Difficult is a subjective term. I find anagrams difficult.
  11. No. It is a "service", so you cannot put it on a drive; you have to install it and then communicate with it over TCP/IP. See #1.

      If you really want a file-based relational database, take a look at SQLite. SQLite supports DB files up to 140 terabytes - good luck finding a disk that size. 2GB partition sizes are only an issue on WinXP and with FAT32 disks; modern OSes and disks are not a problem. Be warned, though: there are caveats to using SQLite on network shares. However, if the use case is configuration which is written rarely (and usually by one person), then it will work fine on a network share for reading from multiple applications. The locking issues mainly come into play when writing to the DB from multiple clients. Note also that this is not a very efficient way to access SQLite databases and is an order of magnitude slower.

      If you are going to be logging data from multiple machines, then MySQL/PostgreSQL is the preferred route. I usually use SQLite and MySQL together - SQLite locally on each machine as a sort of "cache", and also so that the software continues to operate and doesn't lose data when the MySQL server is not available (see the sketch below). In this way you get the speed and performance of SQLite in the application and the network-wide visibility in MySQL for exploitation. It also gives the machine the ability to work offline.

      If you are going with MySQL then it is worth talking to your IT department. They may be able to set it up and administer it for you, or provide a machine specifically for your needs. They usually prefer that to having a machine on their network that is not under their control but has network-wide visibility, and it will give you a good support route if you run into any difficulties.
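For the "SQLite as a local cache" part, a hedged sketch of what the local half might look like with the SQLite C API (the schema and the `synced` flag are my invention; a separate task would forward unsynced rows to MySQL whenever the server is reachable):

```c
#include <sqlite3.h>

/* Log a reading into a local SQLite "cache" DB. A separate task
 * would later forward rows where synced = 0 to the central MySQL
 * server and mark them synced - so the machine works offline. */
int log_reading(const char *db_path, double value)
{
    sqlite3 *db;
    sqlite3_stmt *stmt;
    if (sqlite3_open(db_path, &db) != SQLITE_OK)
        return -1;

    /* Wait up to 5 s on locks instead of failing immediately. */
    sqlite3_busy_timeout(db, 5000);

    sqlite3_exec(db,
        "CREATE TABLE IF NOT EXISTS readings"
        " (ts DATETIME DEFAULT CURRENT_TIMESTAMP,"
        "  value REAL, synced INTEGER DEFAULT 0);",
        NULL, NULL, NULL);

    sqlite3_prepare_v2(db, "INSERT INTO readings (value) VALUES (?);",
                       -1, &stmt, NULL);
    sqlite3_bind_double(stmt, 1, value);
    sqlite3_step(stmt);
    sqlite3_finalize(stmt);
    sqlite3_close(db);
    return 0;
}
```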
  12. It can't. I removed all my <very old> software from LavaG a while ago.
  13. If you don't post any code to show that you have at least tried, then it looks like you are trying to get us to write some school homework for you. Show us what you have tried and we will help.
  14. Post your solution and we will take a look.
  15. The NI-9219 has a programmable constant current source, and ±25 mA is the range, isn't it? The examples show a default excitation current of 50 or 100 µA. I don't know what the resolution is offhand, so I don't know if it can go down to 10 µA, though.
  16. I'm not sure if I'm understanding this correctly, but you may be able to use memory-mapped files. (I'm going to ignore the fact you are talking about TDMS, because that may complicate things.) You would create a mmap file twice as big as you need (right, I know - crystal ball time). Then write file one from the beginning and file two from halfway through. You can read out the data in any order you like while it's being written, just by addressing the bytes directly. So, for example, you could read line one from file one and line one from file two (halfway through the map) and show them however you like (see the sketch below). Alternatively, you write each file into its own mmap and read from multiple maps.
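Here is a minimal POSIX illustration of the two-streams-in-one-map idea (file name and sizes are invented; on Windows the equivalents are CreateFileMapping/MapViewOfFile):

```c
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

#define MAP_SIZE 4096                  /* "twice as big as you need" */

int main(void)
{
    int fd = open("streams.bin", O_RDWR | O_CREAT, 0644);
    ftruncate(fd, MAP_SIZE);           /* reserve the space up front */

    char *map = mmap(NULL, MAP_SIZE, PROT_READ | PROT_WRITE,
                     MAP_SHARED, fd, 0);
    if (map == MAP_FAILED) return 1;

    /* Stream one writes from the start, stream two from halfway. */
    char *one = map;
    char *two = map + MAP_SIZE / 2;
    memcpy(one, "line 1 of file one\n", 19);
    memcpy(two, "line 1 of file two\n", 19);

    /* A reader can address any byte directly while writes go on. */
    printf("%.19s%.19s", one, two);

    munmap(map, MAP_SIZE);
    close(fd);
    return 0;
}
```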
  17. With these types of params I usually pass in an array of bytes, as sketched below. ZCAN_CHANNEL_INIT_CONFIG.vi
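The general idea, sketched in C with a made-up struct (this is not the real ZCAN_CHANNEL_INIT_CONFIG layout): build a byte array whose layout matches the struct the DLL expects, including any alignment padding, and pass it as an array data pointer.

```c
#include <stdint.h>
#include <string.h>

/* Hypothetical config struct a DLL might expect. */
typedef struct {
    uint32_t can_type;
    uint32_t baud_rate;
    uint8_t  mode;
    uint8_t  pad[3];        /* alignment padding counts too! */
} init_config_t;

/* On the LabVIEW side you build a U8 array with this exact layout
 * (at least sizeof(init_config_t) bytes) and pass it as an
 * "array data pointer" - the DLL can't tell the difference. */
void fill_bytes(uint8_t *bytes)
{
    init_config_t cfg = { 0 };
    cfg.can_type  = 1;
    cfg.baud_rate = 500000;
    cfg.mode      = 0;
    memcpy(bytes, &cfg, sizeof cfg);   /* flatten to bytes */
}
```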
  18. Use the librarian file \vi.lib\Utility\libraryn.llb\Set VI Library File Info.vi. It has a boolean for Top Level.
  19. DVRs are in 2009. I have thought twice about upgrading due to new features. Once was for the native JSON, but it was so useless in the real world that I stayed with my own. The second was the Start Asynchronous Call and Wait On Asynchronous Call. My architectures could benefit from it, but there was no pressing need since I already have the tools to achieve the same result. Many people rave about self-indexing tunnels. It was (is?) just some syntactic sugar for the old style, so there were no performance benefits, but it looked a bit nicer - hardly worth the risk of updating to a new version or keeping old and new versions of the code base. Some features (like the live moving of structures and wires) I detest but will suffer if needs must. Anything that forces me to change my work-flow (like Quick Drop) I resist vehemently, so you can add any of those to the list of reasons not to upgrade. Be aware, though: I produce toolkits, so the minimum version is important so that as many people as possible can use them. This is an incentive for me to go back as far as possible. It just happens that 2009 is pretty much bulletproof - no random busy cursors slowing me down - and arguably still the fastest-executing code. When it first came out I was so pleased that the pain and suffering of the buggy 8.x series was over, and I was disappointed with most of the later versions for the same reasons.
  20. It worked, so it was "correctly written" without knowledge of the underlying mechanisms, and a model of how it "might" work in that way (like the DVR example) did not break dataflow. That aside: was the bug fix reported in the 2017 issues responsible for this change in behaviour? If so, what was the use case it addressed where the behaviour was errant? (An example would be nice.)
  21. He actually says they fixed it in the kernel (~12:25), so they were off the hook because the downstream user wouldn't see it. I agree, and I think I mentioned that earlier. But it looks to me like this was fixing an unrelated bug which had unforeseen consequences - probably a bug I've never seen, in another fringe use case.

      I don't do this. It is something I vehemently practice as well as preach. One example: I changed the connector pane of a VI in ECL. If the customer used the new examples it would be fine; if they used the old VI in their own software, the new one would be loaded and they would have to rewire. To make sure this didn't happen, I marked the old VI as deprecated and replaced it with the new one in the palettes. But I still ship the old one with the distribution so as not to break user space. It will remain in there forever.

      Another: the MDI toolkit was written in LV2009 (like everything ). Something broke the event refnums (like now, but much, much worse). To make it work in later versions, all the front panel event refnums have to be replaced; if back-saving, the same had to be done. There were 3 customers already using the 2009 version, so I pulled the 2009 version and recompiled for 2013 (this is when it appeared on the LVTN). I now keep 2 code bases for the toolkit. Development is still done in 2009 and changes are forward-ported to 2013 - the new minimum version. I do not require the 3 customers to update their LabVIEW version to get bug fixes and features, as that would break their code and there is no way for them to recover without also replacing all the event refnums. So you see, this is not the first time I've been bitten by events changing.

      Define "using". I develop in 2009 and recompile in later versions (see the MDI toolkit above). This is mainly because of TPLAT not supporting <2012? without putting special VIs in the code. Forgetting about 2009, 2013 is my next best/favourite choice (for stability and performance), but I have 2012 Linux and Mac versions for testing ECL, so that is the minimum for that one. All the toolkits are tested in all versions on all platforms (except the MDI toolkit), from the minimum version to the latest, in both 32 and 64 bit.
  22. Perhaps you should direct your comments to them instead. I also never said you misrepresented anything; I said you misinterpreted what I said. It's not an appeal to authority. It is just a better oration of my own view, with a preceding explanation of the reasoning, so that I don't have to write pages and pages. Your straw man argument doesn't help your case in this context. There is little point in continuing this branch of the conversation, as it has just become me defending against unfounded accusations, and that doesn't impart any useful information about the issue to anyone. If you have more technical input, then by all means please share it.
  23. I don't think it is a "read only on first call" and I never said as much. I intuited that it may be as described in the DVR example and was "a feedback node/shift register that was initialised on first call". That was my observation of its behaviour, and I wrote code around that. It obviously does work, because your example taken from Jack's stuff was written well before 2017 (2015, in fact) and before the change in behaviour, and my examples show that it also works as expected when registering and re-registering in pre-2017 versions. I think you have just misinterpreted what I said and then proceeded to jump up and down about the semantic meanings. I have been very clear, with examples, about what the issue is and what the change has been, and we have all discussed various work-arounds and alternatives. I may be a lone voice in thinking that it is unacceptable to change a behaviour that worked fine (as far as I could tell) for a decade, but then I have to rewrite code, so it's understandable.
  24. It relates to drjdpowell's comment and to what I later went on to say. There was confusion between "refnums": I took it to mean the event "prototype" refnum (since that was my first example use case), while he was talking about contained refnums. What I said was not "rubbish", though.