Posts posted by smithd

  1. 2 hours ago, Rolf Kalbermatter said:

    NI still provides Eclipse installations to cross compile shared libraries for their NI Linux RT based hardware targets.

    http://www.ni.com/download/labview-real-time-module-2014/4846/en/

    http://www.ni.com/download/labview-real-time-module-2017/6731/en/

    Yeah, they are still available, but I'm saying I've seen the NI guys on the Linux RT forum suggest compiling on the target. Can't find the post now, of course.

    In actuality the compilation part isn't much different -- running vim over SSH on an ARM cRIO is approximately as painful as using Eclipse. The part that is more difficult is the configure step. ZeroMQ has a ton of optional settings (which event mode to use? select, poll, epoll, kqueue... security? they have a NaCl-based encryption scheme... etc.). You can technically figure out each one for a cross-compile, but copying the source to the target and running configure is a looooot easier.

  2. 4 hours ago, ShaunR said:

    They didn't need to do that. All they had to do was enable the rendering of UTF8 strings and we could support unicode easily.

    Whoops, I decided to kill off my rant, but I guess you were already responding.

    I guess you're right, but I'd like to revise my comment -- I don't want to have to flip back and forth between byte arrays and strings. I want to be able to index characters out of strings, and I want to be able to search for a subarray within a byte array, and so on. The shift towards saying all strings are Unicode, without enhancing the features associated with binary byte arrays, seems like a mistake.

  3. 21 hours ago, Mads said:

    I immediately get irritated by how disrespectful the whole thing is of the strengths of LV

     

    18 hours ago, drjdpowell said:

    That was my impression; changing everything for no good reason rather than retaining continuity with the past but changing a few key things where there is real benefit.

    I'm fairly curious about these statements -- is the issue a lot of small incompatibilities like the ones I mentioned, or something more major?

    3 hours ago, Benoit said:

    This is not happening with other text languages, where things change but do not receive a huge refactoring.

    Lol -- python

  4. 4 hours ago, MarkCG said:

    Turns out one of my coworkers is trying to compile ZeroMQ for the cRIO-9068, but not having success. Any ideas, or does anyone have the .so file available? We also have a cRIO-9038, which is a different processor architecture; maybe it will work there?

    Depends on how they are trying to compile it. Back in the 2013 time frame I got it working the way that was being recommended at the time, which was through the Eclipse cross-compile tools. Unfortunately, when NI moved to the new community system all of that content got deleted (as far as I can tell).

    The more recent recommendation I've seen is to just compile it on the target. I can confirm that worked as of a year or so ago. You copy the whole source directory onto the cRIO and run the commands specified in the source (configure, make, make install, most likely). Before you can run those commands you need to have the build tools installed, which you can get with 'opkg install packagegroup-core-buildessential' (https://forums.ni.com/t5/NI-Linux-Real-Time-Discussions/Building-software-for-NI-Linux-RT/td-p/3372279/page/3). You may find that configure says a library is missing, in which case you'll usually need to look in opkg for the -dev version of the library in question. For example, I compiled drivers for a USB device, so I needed to find libusb-dev and then some libusb compatibility layer as well.
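    To make that sequence concrete, here's a rough Python sketch of the on-target steps -- it just replays the same shell commands described above. The source path is a placeholder, and the 'opkg update' step is my addition:

        import subprocess

        # Assumes the ZeroMQ source has already been copied to the cRIO,
        # e.g. to /home/admin/zeromq-src (placeholder path), and that this
        # runs on the target itself with the NI package feeds reachable.
        SRC = "/home/admin/zeromq-src"

        def run(cmd, cwd=None):
            print(">>", " ".join(cmd))
            subprocess.run(cmd, cwd=cwd, check=True)

        run(["opkg", "update"])
        run(["opkg", "install", "packagegroup-core-buildessential"])  # compiler, make, etc.
        run(["./configure"], cwd=SRC)   # the step that's painful to replicate off-target
        run(["make"], cwd=SRC)
        run(["make", "install"], cwd=SRC)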

    I doubt you'll find using the 9038 any easier, as it's still the NI flavor of Linux.

  5. I have it installed; last time I tried to migrate even a small part of a shared library, everything broke. I need RT support and subpanel support, I'm not looking forward to figuring out DLL calls, and I rely on drjdp's JSON stuff, which last I heard didn't work in NXG due to how the variant parsing functions changed. Now that strings are all supposed to be Unicode, it scares me a bit to upgrade any binary-based parsing, so we'd need to spend time validating that. They also changed VI Server's Open VI Reference from having an input specifying normal / call-and-forget / call-and-collect to hardcoding the mode per node, which may cause failures.

    Last time I tried to use it for a small, new project I gave up because it was too slow and I couldn't handle it.

  6. 1 hour ago, drjdpowell said:

    Tried inserting png, jpg, and bmp images (by solution 1) and they can be viewed by SQLite Expert Personal, which I use.

    I've used that in the past too, but I was creeped out today when I looked at it and saw that the downloads were served over http instead of https. The reason I noticed was that Windows claimed the 32-bit version had a trojan. I'm assuming it was a false positive, but even if that's the case you shouldn't be serving up installers over an insecure connection.

  7. 3 hours ago, Gepponline said:

    using SQLite Administrator

    This is a pretty important piece of information. I took a quick look at their page (unfortunately their support forums are down, and there is no way I'm installing that software on my computer, so this is all I can go on) and it seems to say it supports bmp and jpg images. From your posts, you have so far provided it with:

    • The pixel values of an image, type cast directly into a string with no metadata
    • The NI Vision software's flattened representation of an image

    I'd suggest the following:

    • If you have a bmp or jpg file on disk, use the Read from Binary File function with the count (elements to read) input set to -1 (to read the whole file as one big string) and pass that directly to SQLite. You may need to make sure that SQLite Administrator knows the type, but I think bmp and jpg both use the Linux-style 'let's hide the file type inside the file' approach (a signature in the header), so it's probably not necessary.
    • If you have an IMAQdx image in memory, use IMAQ Write String (https://zone.ni.com/reference/en-XX/help/370281AD-01/imaqvision/imaq_write_string/) to generate a bmp or jpg string, and pass that directly to SQLite.

     

    To give a slightly more complete explanation: if you read the first few posts on this page, we discuss the SQLite type system, which is very flexible -- in this case, everything is a string. You told everyone you wanted to store images in the database, which you were successfully doing in the formats described above. What you did not explain is that you had a very specific tool in mind for reading those images back out, and that is where the problem lies. It was expecting the string to be in a specific format, and you were not storing it in that format.
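    If it helps to see the idea outside of LabVIEW, here's a minimal Python sketch of the same approach -- store the raw file bytes as-is and let the viewer interpret the format (table and file names are made up):

        import sqlite3

        conn = sqlite3.connect("images.db")
        conn.execute("CREATE TABLE IF NOT EXISTS pics (name TEXT, data BLOB)")

        # Read the .jpg/.bmp exactly as it sits on disk (the equivalent of
        # Read from Binary File with count = -1) and store the raw bytes.
        with open("photo.jpg", "rb") as f:
            raw = f.read()
        conn.execute("INSERT INTO pics VALUES (?, ?)", ("photo.jpg", raw))
        conn.commit()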

  8. 8 hours ago, bbean said:

    Is there any way to do this without MAX? Or a description of what happens when MAX executes the format?

    Unfortunately no Windows boxes are allowed in the previously mentioned "secure" area. So the wipe needs to be done without MAX. Once the cRIO is wiped it can leave the secure area and all normal NI stuff (MAX, RAD, Windows) can be used. As someone told me, it's the security policy, it doesn't have to make sense.

    Shouldn't such absurd rules come with a budget? Like, no, you can't take the machine out of the secure area, but here, why don't you just buy another one instead? :D

     

  9. 1 hour ago, Porter said:

    it is up to you to catch the error and reset the connection. If you don't reset the connection, this can happen:

    Lol sounds like intended behavior to punish those who ignore errors ;)

    I think the missing part is that Tanner also added an incrementing counter on the send side, so you should see errors on every subsequent read because the counter will never catch up.

  10. 12 hours ago, Porter said:

    TCP_NODELAY.vi: Cool... Was this ever used & tested?

    I don't think so; I forgot it was in there. Seems like a good idea though. It just needs to be added to TCP Master::Initialize Master and TCP Slave::wait on listener.

    12 hours ago, Porter said:

    Serial Shared Components.lvlib->Configure Serial Port.vi: Case structure for number of stop bits. I suggest having some override for this. Maybe have a property node for the stop bits setting. I've come across a number of situations where I have needed RTU with no parity and one stop bit.

    This is illegal per the spec -- the Modbus serial line spec requires two stop bits when no parity is used. It's obviously not difficult to change this, but... it's also not hard to just set the value yourself after initializing the Modbus library.

    12 hours ago, Porter said:

    RTU Data Unit.lvclass->Calculate CRC.vi: I think that there is a more efficient way to calculate the CRC using a lookup table. I'd be happy to share it when I get around to implementing it on Plasmionique Modbus Master.

    I think I see what you mean and attached an implementation. It looks to be about 3x faster (edit: 2x with debugging off) to read from the lookup table vs. calculating it out.

    Just a thought though: it probably makes other code around it slower by trashing your CPU cache (the table is about 1/4 of the L2 cache on a Zynq-based cRIO).
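    For anyone following along, a table-driven Modbus CRC-16 looks roughly like this -- a Python sketch of the common 256-entry reflected-table variant, not the attached VI:

        def _build_table(poly=0xA001):
            # 256-entry table for the reflected Modbus polynomial (0xA001).
            table = []
            for byte in range(256):
                crc = byte
                for _ in range(8):
                    crc = (crc >> 1) ^ poly if crc & 1 else crc >> 1
                table.append(crc)
            return table

        _TABLE = _build_table()

        def crc16_modbus(data: bytes) -> int:
            crc = 0xFFFF
            for b in data:
                # One table lookup replaces the 8-iteration inner bit loop.
                crc = (crc >> 8) ^ _TABLE[(crc ^ b) & 0xFF]
            return crc

        # Well-known check value: the Modbus CRC of "123456789" is 0x4B37.
        assert crc16_modbus(b"123456789") == 0x4B37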

    12 hours ago, Porter said:

    Serial Shared Components.lvlib->Serial Read.vi: I don't like the idea of polling the bytes at port every 8ms. Why not just read the specified number of bytes and let VISA handle the timeout?

    If VISA times out, it returns whatever is in the buffer. If it happens to time out mid-packet (as may be the case on Linux RT or with USB-serial adapters), then you have half the packet in your read loop and half still out on the bus. This isn't important for the master, since if you time out you pretty much have to flush the buffer anyway, but the slave sits there waiting for data forever, so dealing with partial packets is annoying.

    Also, this was like 6 years ago so I may be wrong, but at the time I think one of my goals was to make the serial and TCP functions act the same. That's why TCP also uses the 'buffered' feature.

    12 hours ago, Porter said:

    Serial Shared Components->Serial Read to CRLF.vi: Why not just read until LF (let VISA read take care of this)?

    I do -- that's why I make sure to enable the termination character. However, it has to be CRLF; LF by itself is not acceptable. You may very well ask in which situation you would get an LF by itself -- I don't know, but I do know that it's a 5-cent chip wiggling the voltage on some wires back and forth at an absurd speed, so I figure it can't hurt to check :wacko:

    12 hours ago, Porter said:

    ASCII Data Unit.lvclass->Read ADU Packet.vi: Why is start character written to request unit ID of Serial Data Unit? Shouldn't it be the unit ID?

    Whoops. That method really only exists to make sure it's not a broadcast, so in the 99% use case it happens to work. Otherwise ASCII doesn't care.

    Fixed, though.

    12 hours ago, Porter said:

    IP Data Unit.lvclass->Read ADU Packet.vi: Transaction ID mismatch will discard the packet. What will happen on a noisy network connection with multiple transactions being sent out? See: https://github.com/rfporter/Modbus-Master/issues/1

    I don't actually understand what the problem was with this. As you said in your comment on that issue, each access should be synchronous for a given master or slave, so there is no such thing as multiple transactions outstanding on the connection. The transaction ID check just verifies that. If an error occurs, you must close the connection and reset. I can't think of any reason that would not be the right response -- can you? I'm also confused by this because that part of the code was implemented by Tanner, who is the person who posted that issue to your repo, so presumably he thought that code fixed the issue?

    Note that this behavior (close and reopen) is different from serial (wait, flush the buffers, and hope things start to work again), not because the serial way is better but because serial is a 5-cent chip twiddling the voltage on some wires -- the serial version has no connection to close.
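    To illustrate what that check is doing, here's a bare-bones Python sketch of one synchronous Modbus TCP transaction (read holding registers) with the transaction ID verification. It is not the library's actual code, and it assumes recv returns complete messages for brevity:

        import socket
        import struct

        def read_holding_registers(sock: socket.socket, txn_id: int,
                                   unit: int = 1, start: int = 0, count: int = 1) -> bytes:
            # MBAP header: transaction ID, protocol ID (0), length, unit ID.
            pdu = struct.pack(">BHH", 0x03, start, count)        # function code 3
            sock.sendall(struct.pack(">HHHB", txn_id, 0, len(pdu) + 1, unit) + pdu)

            header = sock.recv(7)
            rx_txn, _, length, _ = struct.unpack(">HHHB", header)
            body = sock.recv(length - 1)
            if rx_txn != txn_id:
                # Something is out of sync; the only sane recovery is to
                # close the connection and let the caller reconnect.
                sock.close()
                raise IOError("transaction ID mismatch -- connection closed")
            return body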

    12 hours ago, Porter said:

    Why does TCP Master/Slave need Protocol Read to CRLF.vi?

    The idea of the pluggable transport was to support any Modbus ADU over any network type. I've definitely heard of RTU over TCP; this would be to support ASCII over TCP.

    The real answer is that it was easy and it kept the implementations mirrored.

    crc bench.vi

    crc.vi

  11. I can't see what you're actually doing, but I'd say this seems expected... SQLite has somewhat squishy types.

    https://www.sqlite.org/datatype3.html

    "Any column in an SQLite version 3 database, except an INTEGER PRIMARY KEY column, may be used to store a value of any storage class."

    "Any column can still store any type of data. It is just that some columns, given the choice, will prefer to use one storage class over another. The preferred storage class for a column is called its "affinity"."

     

    If that makes you feel bad, just remember that almost every interaction you have with your bank relies on a technology whose type system is pure madness.

  12.  

    14 hours ago, Diegodemf said:

    I think I will still need to gather a good amount of measurements to send in a big packet to improve the speed.

    However, this will increase latency; that is the tradeoff. To be reasonably efficient with the network, you'd need to transfer bursts of ~20 samples per packet over wired Ethernet.

    14 hours ago, Diegodemf said:

    the application has to work in real time.

    How real-time is real time? What does the LabVIEW code do with the information?

    14 hours ago, Diegodemf said:

    The most important thing is reliability

    Then my answer would be that you should write to a file, but as above it depends on exactly how important it is. What happens if you lose a sample? What happens if the network disconnects for 30 seconds?

    14 hours ago, Diegodemf said:

    I have thought of ftp/scp but I haven't found a way to connect those services directly with LabVIEW and automate the process of reading on the PC and erasing the file on the rpi

    Yes, you would have to split the file up yourself. The strategy I follow is to have a directory /data with a file /data/temp.ext and a subdirectory /data/done which contains data1.ext, data2.ext, etc. I then use the built-in LabVIEW FTP functions (on the Connectivity palette, I believe, but I'm not sure) to fetch files out of the /data/done folder. That way I don't accidentally pull off a half-finished file. On the Python side this kind of rotation appears to be automated for you: https://docs.python.org/3/library/logging.handlers.html#rotatingfilehandler and https://www.blog.pythonlibrary.org/2014/02/11/python-how-to-create-rotating-logs/
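    If you end up rolling it yourself on the Pi, a minimal sketch of that temp-file/done-folder scheme might look like this (paths and the size threshold are just placeholders):

        import os

        DONE_DIR = "/data/done"
        TEMP_PATH = "/data/temp.ext"
        ROLL_BYTES = 1_000_000  # rotate after ~1 MB; tune to your data rate

        def append_sample(record: bytes) -> None:
            os.makedirs(DONE_DIR, exist_ok=True)
            with open(TEMP_PATH, "ab") as f:
                f.write(record)
            # Once the temp file is big enough, move it into /data/done so the
            # PC only ever fetches completed files over FTP.
            if os.path.getsize(TEMP_PATH) >= ROLL_BYTES:
                n = len(os.listdir(DONE_DIR)) + 1
                os.rename(TEMP_PATH, os.path.join(DONE_DIR, f"data{n}.ext"))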

    If you don't like FTP for some reason, someone put together a WinSCP wrapper library: https://lavag.org/topic/20474-free-labview-winscp-library/?tab=comments#comment-124582

     

     

  13. There does not appear to be anything standard about that 'normal' file. I don't think I've ever seen such a file before.

    To give you a clear picture of the complexity of what you are asking, and why drjd is quite right... just looking at that example, I would assume the following about your file:

    • The entire file is a single JSON object
    • The first line is the one and only key within that object
    • There is a CRLF between the key and its value
    • The rest of the file is the value of that single key
    • That value is itself an object
    • This object consists of key-value pairs separated by "-" and delimited by CRLF
    • Every value is a string
    • Your file cannot contain strings which themselves contain CRLF

    Following these rules, your JSON output would be:

    {"Details": {"Name":"abc", "Organization": "anonymous", "Location": "xyz"}}

    If that is what you want, then my only answer is that you will have to procedurally parse the entire file from a string and construct the JSON object manually.
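    To make "procedurally parse" concrete, a Python sketch following the rules above might look like this (the sample input is my guess at what your file contains):

        import json

        def parse_normal_file(text: str) -> str:
            lines = text.split("\r\n")
            top_key = lines[0].strip()          # first line is the single top-level key
            inner = {}
            for line in lines[1:]:              # remaining lines are "key-value" pairs
                if not line.strip():
                    continue
                key, _, value = line.partition("-")
                inner[key.strip()] = value.strip()
            return json.dumps({top_key: inner})

        sample = "Details\r\nName-abc\r\nOrganization-anonymous\r\nLocation-xyz"
        print(parse_normal_file(sample))
        # {"Details": {"Name": "abc", "Organization": "anonymous", "Location": "xyz"}}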

    If that is not what you want, then you should figure out the rules for your "normal simple" file. Once you figure out the rules, you will still have to write the parsing code manually, because this isn't a standard computer-parseable format that I am aware of.

    If there are no rules, then you may need to consider a different strategy for whatever it is you are actually trying to accomplish. For example, if you are converting your files into JSON to make them easier for a computer to parse, then you'd be better off just shoving all these "normal simple" text files into Elasticsearch or Splunk and never looking at them again.

  14. I'm super confused...it sounds like you just want to stream data, right?

    So latency isn't important -- just throughput. So turning off Nagle's algorithm won't help.

    However, you're introducing a latency issue by having LabVIEW request data by sending a string with "?" in it?

    The stated goal is 7 kHz * 50 bytes, which is 2.8 Mbit/s -- hardly anything. But when you introduce round-trip communication into the streaming pathway, throughput becomes limited by latency rather than bandwidth.

    Can you clarify whether there is an actual need for this back-and-forth?

     

    How important is reliability? If latency is unimportant and reliability is very important, I'd suggest writing your data to disk on the Raspberry Pi and then FTP/SCP/WebDAV-ing it off. If reliability is not important, and this is on a local network, consider UDP.

  15. On 8/20/2018 at 7:02 AM, Porter said:

    At the time, I was using the actor framework for a large project. I had multiple com ports and multiple devices on each com port. I decided to have an actor per device. Each modbus device actor would build its own modbus instance from the device's configuration file. Since the modbus instances were thread safe, I didn't have to worry about sharing a com port with multiple devices. Devices on the same com port simply wait in a FIFO queue for the port to be available

    I got back to looking at the code today; I had forgotten, but it looks like I did make the object thread-safe. All of the request-response calls are mutexed.

     

    For giggles, I spent some time on it today and just pushed a few changes to a fork here: https://github.com/smithed/LabVIEW-Modbus-API

    I didn't like how my old team did the transaction ID fix, so that's one commit. The second commit fixes the serial handling so the code is no longer nasty. For ASCII I added a 'read until CRLF' method to the network class. For RTU I added a cheat method which tries to guess how big the packet is. If the packet is of unknown size, it risks a CRC collision but just polls whatever is at the port. It totally ignores the 3.5-character-time nonsense now. I tested all 3 standard pairs on localhost, with the serial ports emulated using com0com. This isn't a perfect representation of real life, but it works OK.

  16. I wasn't the one to do this, but someone here evaluated it and had issues with performance -- it seemed to all be running in a relatively single-threaded way, even on the server. Also, the newer features (alarms, etc.) took up a ton of memory compared to what we expected. However, performance issues are always tied to a use case -- it should be easy to grab an eval copy and test.

    Because of the way that OPC UA data models work, the NI server shows up kind of strangely in third-party clients. It probably works fine for more advanced tools like Kepserver, but I was playing with a fairly low-level Python client and the representation was weird to me. If you want to use a third-party client, I'd also grab the eval and test it out.
