Everything posted by ShaunR

  1. Just spit-balling, but you may be seeing the effects of lingering (or not lingering, as the case may be). You can try setting linger ON with a zero timeout (using setsockopt), which will cause an abortive close. There is a VI that lets you obtain the underlying raw TCP/IP connection, which can then be passed to setsockopt; it's usually used for TCP_NODELAY.
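     A minimal sketch of the abortive-close behaviour, shown with Python's socket API rather than LabVIEW (the host and port are made up):

     ```python
     import socket
     import struct

     sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
     sock.connect(("192.168.1.50", 6340))   # hypothetical host/port

     # l_onoff=1, l_linger=0: close() discards unsent data and sends RST
     # (an abortive close) instead of the normal FIN handshake/TIME_WAIT.
     sock.setsockopt(socket.SOL_SOCKET, socket.SO_LINGER, struct.pack("ii", 1, 0))

     sock.sendall(b"last message")
     sock.close()                           # aborts rather than lingers
     ```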
  2. Personally, I would start by looking at the TCP/IP connection. In your original code you are opening and closing the connection with every image (or every couple of images). That is inherently slow. I would be looking to keep the connection open during acquisition and to stream the data to the receiver. You haven't stated your image size and frame rate, but a 640x480 image at 30 fps is about 70 Mb/s (assuming 8 bits per pixel: 640 × 480 × 8 × 30 ≈ 74 Mbit/s), which is achievable even on a 100 Mb wired connection. This is what I was hinting at when talking about optimising, and why it probably wasn't sufficient in its current form. Additionally, it would also solve your premature-closing problem.
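     A sketch of what "keep the connection open and stream" could look like, in Python for brevity (the length-prefix framing is an assumption; use whatever framing your receiver expects):

     ```python
     import socket
     import struct

     def stream_frames(host, port, frames):
         # One persistent connection for the whole acquisition; each frame
         # is prefixed with a 4-byte big-endian length so the receiver can
         # split the byte stream back into individual images.
         with socket.create_connection((host, port)) as sock:
             for frame in frames:          # frame: bytes of one image
                 sock.sendall(struct.pack(">I", len(frame)) + frame)
     ```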
  3. Try both and see which one fits your project. I think you will find your current VI will be insufficient for what you are trying to achieve. The thing to be aware of with the producer/consumer is that it always ends with the same quandary: what to do if the consumer cannot keep up with the producer? If your current code can keep up, that's fine. If it can't, you need to optimise. If it still can't, then you need to send pointers rather than data (kind of an optimisation). If even that doesn't work, you need to decide what data to lose. If you can't lose any, then........ Yes. As it stands currently, inside.
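     One way the "decide what data to lose" option can look, sketched in Python with a bounded queue (single producer assumed; the drop-oldest policy is just one choice):

     ```python
     import queue

     q = queue.Queue(maxsize=64)      # bounded queue: the back-pressure point

     def produce(item):
         # Lossy strategy: when the consumer falls behind, drop the oldest
         # item rather than blocking the producer (acquisition) loop.
         try:
             q.put_nowait(item)
         except queue.Full:
             try:
                 q.get_nowait()       # discard the oldest item
             except queue.Empty:
                 pass                 # consumer emptied it in the meantime
             q.put_nowait(item)
     ```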
  4. Take a look at the producer/consumer. There is a project template when you create a "new" project.
  5. Database. Learn about graphs and advanced data structures. I would recommend SQLite, MySQL or Postgres (and you shouldn't be putting it all in a single table).
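     By way of illustration, a minimal normalised two-table layout in Python's built-in sqlite3 (table and column names are made up for the example):

     ```python
     import sqlite3

     conn = sqlite3.connect("results.db")    # hypothetical file name
     conn.executescript("""
         CREATE TABLE IF NOT EXISTS units (
             id     INTEGER PRIMARY KEY,
             serial TEXT UNIQUE NOT NULL
         );
         CREATE TABLE IF NOT EXISTS measurements (
             id      INTEGER PRIMARY KEY,
             unit_id INTEGER NOT NULL REFERENCES units(id),
             name    TEXT NOT NULL,
             value   REAL,
             taken   TIMESTAMP DEFAULT CURRENT_TIMESTAMP
         );
     """)
     ```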
  6. At a very simple level, just add a conn.send('ACK') after print('File has been saved.') in your Python code (on Python 3 use b'ACK', since sockets send bytes). Then, in your LabVIEW code, insert a Read.vi before the close.
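     A minimal sketch of the receiving side with the ACK added. This assumes the sender prefixes the file with a 4-byte length (an assumption; adapt it to however your receiver already knows the size); port and file name are made up:

     ```python
     import socket
     import struct

     srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
     srv.bind(("", 5005))                          # hypothetical port
     srv.listen(1)
     conn, _ = srv.accept()

     size = struct.unpack(">I", conn.recv(4))[0]   # assumed length prefix
     data = b""
     while len(data) < size:
         chunk = conn.recv(4096)
         if not chunk:
             break
         data += chunk

     with open("image.bin", "wb") as f:            # hypothetical file name
         f.write(data)

     print('File has been saved.')
     conn.send(b'ACK')    # the LabVIEW Read (before TCP Close) blocks on this
     conn.close()
     ```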
  7. Yeah. It's not ideal, since it depends on how much data is being sent. A more robust method is to send an ACK back, so that a read function (before the close) blocks until all bytes have been received.
  8. Add a delay between the last write and the close, and make sure you are not terminating (closing) the connection before all bytes have been transmitted.
  9. But I was promised in the 1980s that we would all be working 2-day weeks by now because of the automation. That would be a better way to solve the traffic problem.
  10. We should start using the Deci calendar immediately.
  11. OK. That's fair enough. One thing to be aware of is functions where you can request, say, 1024 bytes but which can return less. In that scenario the length is often written with the actual number of bytes transferred, and that "actual" value should then be used to resize the array you read into. With those types of functions it is also a common error to choose "value" instead of "pointer to value" for the length, since 99.9% of the time it seems to work fine with "value"....until LabVIEW crashes 7 hours into a test.
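      The same pattern sketched with Python's ctypes (the DLL name and prototype are hypothetical, but the pointer-to-length and resize steps are the point):

      ```python
      import ctypes

      # Hypothetical prototype: int ReadData(uint8_t *buf, uint32_t *len)
      # *len is the buffer size on entry, the bytes actually read on exit.
      lib = ctypes.CDLL("./device.dll")

      buf = (ctypes.c_uint8 * 1024)()              # request up to 1024 bytes
      length = ctypes.c_uint32(ctypes.sizeof(buf))

      lib.ReadData(buf, ctypes.byref(length))      # length passed as a POINTER

      data = bytes(buf[:length.value])             # resize to the actual count
      ```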
  12. Well, it's probably a bit more than that. How do you know how big an array to pass? If data overwrites the end of the array, it will crash LabVIEW.
  13. I haven't looked, but it sounds like a C string issue. Rather than returning an array of bytes, a C string type is used to get the data into LabVIEW. People often prefer the C string because it only requires one call, forgetting that it can't be used on binary data; whereas to get an array of bytes you usually have to call the function first with a NULL array of length 0 to get the length, then call it again with an array of the right size (if there is no dedicated function for that purpose).
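      The two-call pattern, sketched with Python's ctypes against a hypothetical function:

      ```python
      import ctypes

      # Hypothetical prototype: int GetBlob(uint8_t *buf, uint32_t *len)
      # Called with buf=NULL and *len=0, it writes the required size to *len.
      lib = ctypes.CDLL("./device.dll")

      needed = ctypes.c_uint32(0)
      lib.GetBlob(None, ctypes.byref(needed))      # call 1: query the length

      buf = (ctypes.c_uint8 * needed.value)()      # allocate the exact size
      lib.GetBlob(buf, ctypes.byref(needed))       # call 2: fetch the data

      data = bytes(buf)   # byte arrays, unlike C strings, survive embedded NULs
      ```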
  14. I don't know about Zxing but the LabVIEW bar code reader reports it is a Pharmacode with the string "1314"
  15. Never expose a database directly, and always, always use TLS or SSH tunnelling. Use certificate pinning wherever possible. The preferred method is a web server to authenticate, and then HTTPS or websockets depending on the type and frequency of the data. The current trend is for web APIs, which you can easily do in LabVIEW and which insulate your software, somewhat, from SQL injection.
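      On the SQL-injection point, the server side of such an API should bind user input as parameters rather than concatenating it into SQL. A minimal illustration with sqlite3 (names are made up):

      ```python
      import sqlite3

      conn = sqlite3.connect("app.db")    # hypothetical database
      conn.execute("CREATE TABLE IF NOT EXISTS users (id INTEGER PRIMARY KEY, name TEXT)")

      user_input = "x'; DROP TABLE users;--"   # hostile input from the API

      # The ? placeholder makes the driver bind user_input as data, so it
      # can never terminate the statement and inject its own SQL.
      rows = conn.execute(
          "SELECT id, name FROM users WHERE name = ?",
          (user_input,),
      ).fetchall()
      ```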
  16. The VI is incomplete. If you press the run button, it will show you the errors, most of which will be unwired inputs. The TODOs need to be implemented.
  17. Isn't that what non-disclosure agreements are for?
  18. I think you are going to need NIs input on this one.
  19. Or do you want a compile time of 7 hours instead of 20 minutes?
  20. The main issue with TestStand is that it tries to be all things to all people. It's pitched as a test sequence engine but is too complicated and cumbersome for that. The main UI is far too complicated for production, and the "screen" hooks are awkward and difficult to implement. Reports seem like an afterthought, and the LabVIEW hooks are prone to crashing. If you thought global variables were bad, well, here we have several different varieties with different scopes, and figuring out where things are defined or coming from is a very deep rabbit hole. I greatly simplified my life when using TestStand by having a single TCP/IP connector VI that just emits an API string (which you define in the TestStand editor) and running a service VI that receives the string and invokes the actual tests: basically reducing TestStand to a command/response recipe script that orders tests, retrieves results and throws up a big PASS/FAIL at the end. At that point it really doesn't matter what generates the API strings: TestStand or a custom sequencer.
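      A rough sketch of such a command/response service, in Python rather than a service VI (the command strings, port and result values are all invented for the example):

      ```python
      import socket

      # Hypothetical command set; the sequencer (TestStand or otherwise)
      # sends one-line API strings and reads back one-line results.
      TESTS = {
          "MEAS:VOLT": lambda: "5.02",
          "MEAS:CURR": lambda: "0.13",
      }

      srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
      srv.bind(("", 6001))                  # hypothetical port
      srv.listen(1)

      while True:
          conn, _ = srv.accept()
          with conn, conn.makefile("rw") as f:
              for line in f:                # one command per line
                  cmd = line.strip()
                  result = TESTS.get(cmd, lambda: "ERR unknown command")()
                  f.write(result + "\n")    # one response per command
                  f.flush()
      ```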
  21. Difficult is a subjective term. I find anagrams difficult.
  22. No. It is a "service", so you cannot put it on a drive; you have to install it and then communicate with it over TCP/IP. See #1. If you really want a file-based relational database, take a look at SQLite. SQLite supports DB files up to 140 terabytes (good luck finding a disk that size). 2 GB partition sizes are only an issue on WinXP and with FAT32 disks; on modern OSs and disks they are not an issue. Be warned, though: there are caveats to using SQLite on network shares. However, if the use case is configuration which is written rarely (and usually by one person), then it will work fine on a network share for reading from multiple applications. The locking issues mainly come into play when writing to the DB from multiple clients. Note also that this is not a very efficient way to access SQLite databases and is an order of magnitude slower. If you are going to be logging data from multiple machines, then MySQL/PostgreSQL is the preferred route. I usually use SQLite and MySQL together: SQLite locally on each machine as a sort of "cache", and also so that the software continues to operate and doesn't lose data when the MySQL server is unavailable (a sketch of this pattern follows). In this way you get the speed and performance of SQLite in the application and the network-wide visibility of MySQL for exploitation. It also gives the machine the ability to work offline. If you are going with MySQL then it is worth talking to your IT department. They may be able to set it up and administer it for you, or provide a machine specifically for your needs. They usually prefer that to having a machine on their network that is not under their control but has network-wide visibility, and it will give you a good support route if you run into any difficulties.
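      The local-cache pattern, sketched in Python (table and column names are invented; mysql_conn is assumed to be a PyMySQL-style connection):

      ```python
      import sqlite3

      # Log locally to SQLite first; forward unsynced rows to MySQL
      # whenever the server happens to be reachable.
      local = sqlite3.connect("cache.db")
      local.execute("""CREATE TABLE IF NOT EXISTS log (
          id INTEGER PRIMARY KEY, payload TEXT, synced INTEGER DEFAULT 0)""")

      def record(payload):
          # Always succeeds, even when the MySQL server is down.
          local.execute("INSERT INTO log (payload) VALUES (?)", (payload,))
          local.commit()

      def sync(mysql_conn):
          # Call periodically; skip (or catch the error) when offline.
          rows = local.execute(
              "SELECT id, payload FROM log WHERE synced = 0").fetchall()
          with mysql_conn.cursor() as cur:
              for row_id, payload in rows:
                  cur.execute("INSERT INTO log (payload) VALUES (%s)", (payload,))
                  local.execute("UPDATE log SET synced = 1 WHERE id = ?", (row_id,))
          mysql_conn.commit()
          local.commit()
      ```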
  23. It can't. I removed all my <very old> software from LavaG a while ago.
  24. If you don't post any code to show that you have at least tried, then it looks like you are trying to get us to write some school homework for you. Show us what you have tried and we will help.