Everything posted by smithd
-
Yeah, they are still available, but I'm saying I've seen the NI guys on the Linux RT forum suggest compiling on the target. Can't find it now, of course. In actuality the compilation part isn't much different -- running vim over ssh on an ARM cRIO is approximately as painful as using Eclipse. The part that is more difficult is the configure step. ZeroMQ has a ton of optional settings (which event mode to use -- select, poll, epoll, kqueue? security -- they have a NaCl-based encryption scheme? etc.). You can technically figure out each one by hand, but copying the source to the target and running configure is a lot easier.
-
LabVIEW NXG - when will we start using it
smithd replied to 0_o's topic in Development Environment (IDE)
Whoops, I decided to kill off my rant but I guess you were already responding. You're right, though, so I'd like to revise my comment -- I don't want to have to flip back and forth between byte arrays and strings. I want to be able to index characters out of strings, and I want to be able to search for a subsequence within a byte array, etc. The shift towards saying all strings are unicode, without enhancing the features associated with binary byte arrays, seems like a mistake. -
LabVIEW NXG - when will we start using it
smithd replied to 0_o's topic in Development Environment (IDE)
I'm fairly curious about these statements -- is the issue a lot of small incompatibilities like the ones I mentioned, or something more major? Lol -- python -
LabVIEW NXG - when will we start using it
smithd replied to 0_o's topic in Development Environment (IDE)
I'd rather they do TLS. You don't normally need SSH to be fast, so using something like https://github.com/sshnet/SSH.NET/ or calling PuTTY through System Exec isn't a huge deal. -
Depends on how they are trying to compile it. Back in the 2013 time frame I got it working the way that was being recommended at the time, which was through the Eclipse cross-compile tools. Unfortunately, when NI moved to the new community system all of that got deleted (as far as I can tell). The more recent recommendation I've seen is to just compile it on the target. I can confirm that worked as of a year or so ago. You just copy the whole source directory onto the cRIO and run the commands specified in the source (configure, make, make install, probably). Before you can run those commands you need to have the build tools installed, which you can get with 'opkg install packagegroup-core-buildessential' (https://forums.ni.com/t5/NI-Linux-Real-Time-Discussions/Building-software-for-NI-Linux-RT/td-p/3372279/page/3). You may find that it says a library is missing, in which case you'll usually need to look in opkg for the -dev version of the library in question. For example, I compiled drivers for a USB device, so I needed to find libusb-dev and then some libusb compatibility layer as well. I doubt you'll find using the 9038 any easier, as it's still the NI flavor of Linux.
-
LabVIEW NXG - when will we start using it
smithd replied to 0_o's topic in Development Environment (IDE)
Probably more like 2025 or 2030 I thought, unless you mean 'when they stop caring about current lv at all' -
LabVIEW NXG - when will we start using it
smithd replied to 0_o's topic in Development Environment (IDE)
I have it installed; last time I tried to migrate even a small part of a shared library, everything broke. I need RT support and subpanel support, I'm not looking forward to figuring out DLL calls, and I rely on drjdp's JSON stuff, which, last I heard, didn't work in NXG due to how the variant parsing functions changed. Now that strings are all supposed to be unicode, it scares me a bit to upgrade any binary-based parsing, so we'd need to spend time validating that. They changed VI server's open VI from having an input specifying normal/call-and-forget/call-and-collect, so now it's hardcoded per node, which may cause failures. Last time I tried to use it for a small, new project I gave up because it was too slow and I couldn't handle it. -
I've used that in the past too, but I was creeped out today when I looked at it and saw that the downloads were http instead of https. The reason I noticed was that Windows claimed the 32-bit version had a trojan. I'm assuming it was a false detection, but even if that's the case, you shouldn't be serving up installers over an insecure connection.
-
DB Browser seems to only support PNG/BMP. I found a video suggesting JPEG support too, but the link said it's only available via nightly builds.
-
This is a pretty important piece of information. I took a quick look at their page (unfortunately their support forums are down, and there is no way I'm installing that software on my computer, so this is all I can go on) and it seems to say it supports bmp and jpg images. From your posts, you have so far provided it with:
- The pixel values of an image, type cast directly into a string with no metadata
- The NI Vision software's flattened representation of an image
I'd suggest the following:
- If you have a bmp or jpg file, use the read from binary file function with elements to read set to -1 (to read the whole file as one big string) and pass that directly to sqlite. You may need to make sure that sqlite administrator knows the type, but I think bmp and jpg both use the Linux-style 'let's hide the extension inside the file' encoding, so it's probably not necessary.
- If you have an imaqdx image in memory, use the IMAQ Write String VI (https://zone.ni.com/reference/en-XX/help/370281AD-01/imaqvision/imaq_write_string/) to generate a bmp or jpg string, and pass that directly to sqlite.
To give a slightly more complete explanation: if you read the first few posts on this page, we discuss the sqlite type system, which is very flexible -- in this case, everything is a string. You told everyone you wanted to store images in the database, which you were successfully doing in the formats described above. You did not explain that you had a very specific tool in mind for reading those images back out, and that is where the problem lies. The tool expects the string to be in a specific format, and you were not storing it in that format.
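For anyone reading along later, here's a rough Python sketch of the 'store the raw jpg/bmp bytes as a blob' idea, since the principle is the same no matter which language writes the database (file and table names are made up):
```python
import sqlite3

conn = sqlite3.connect("images.db")
conn.execute("CREATE TABLE IF NOT EXISTS images (name TEXT PRIMARY KEY, data BLOB)")

# Equivalent of "read from binary file with elements to read = -1":
# the whole .jpg/.bmp file, header and all, as one blob of bytes.
with open("snapshot.jpg", "rb") as f:
    raw = f.read()

conn.execute("INSERT OR REPLACE INTO images VALUES (?, ?)", ("snapshot.jpg", raw))
conn.commit()

# Reading it back returns exactly the same bytes, which any jpg-aware tool can open.
blob = conn.execute("SELECT data FROM images WHERE name = ?", ("snapshot.jpg",)).fetchone()[0]
assert blob == raw
conn.close()
```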
-
Transfer Image from cDAQ-9133 Linux RT to cDAQ-9133 WES7
smithd replied to viSci's topic in LabVIEW General
Shouldn't such absurd rules come with a budget? Like, no, you can't take the machine out of the secure area, but here, why don't you just buy another one instead? -
NI Modbus API on GitHub
smithd replied to Porter's topic in Remote Control, Monitoring and the Internet
Lol, sounds like intended behavior to punish those who ignore errors. I think the missing part is that Tanner also added an incrementing counter on the send side, so you should see errors on every subsequent read because the counter will never catch up. -
NI Modbus API on GitHub
smithd replied to Porter's topic in Remote Control, Monitoring and the Internet
Ah yes, that's much nicer. I didn't know it could be broken up like that. -
NI Modbus API on GitHub
smithd replied to Porter's topic in Remote Control, Monitoring and the Internet
I don't think so, I forgot it was in there. Seems like a good idea though. It just needs to be added to TCP Master::Initialize Master and TCP Slave::wait on listener.

This is illegal per the spec. It's obviously not difficult to change this, but... it's also not hard just to set the value after initializing the modbus library.

I think I see what you mean and attached an implementation. It looks to be about 3x faster (edit: 2x with debug off) to read from the lookup vs calculating it out. Just a thought though: it probably makes other code around it slower by completely trashing your CPU cache (it's about 1/4 of the L2 cache on a zynq-based cRIO).

If VISA times out, it returns whatever is in the buffer. If it happens to time out mid-packet (as may be the case on Linux RT or with USB-serial adapters) then you have half the packet in your read loop and half out on the bus. This isn't important for the master, since if you time out you pretty much have to flush the buffer, but the slave is sitting there waiting for data forever, so dealing with partial packets is annoying. Also, this was like 6 years ago so I may be wrong, but at the time I think one of my goals was to make the serial and TCP functions act the same. That's why TCP also uses the 'buffered' feature.

I do, that's why I make sure to enable the term char. However, it has to be CRLF; LF by itself is not acceptable. You may very well ask in which situation you would get an LF by itself -- I don't know, but I do know that it's a 5c chip wiggling the voltage on some wires back and forth at an absurd speed, so I figure it can't hurt to check.

Whoops. That method really only exists to make sure it's not broadcast, so in the 99% use case it happens to work. Otherwise ASCII doesn't care. Fixed, though.

I don't actually understand what the problem was with this. As you said in your comment on that issue, each access should be synchronous for a given master or slave, so there is no such thing as multiple transactions outstanding on the connection. The transaction ID check just sort of verifies that. If an error occurs, you must close the connection and reset. I can't think of any reason that would not be the right response, can you? I'm also confused by this because that part of the code was implemented by Tanner, who is the person who posted that issue to yours, so presumably he thought that code fixed the issue? Note that this behavior (close and reopen) is different from serial (wait, flush the buffers, and hope things start to work again), not because the serial way is better but because serial is a 5c chip twiddling the voltage on some wires. The serial version has no connection to close.

The idea of the pluggable transport was to support any modbus ADU over any network type. I've definitely heard of RTU over TCP; this would be to support ASCII over TCP. The real answer is that it was easy and it kept the implementations mirrored.

crc bench.vi crc.vi
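In case anyone wants the lookup-vs-bitwise comparison outside of the attached VIs, here is a rough Python sketch of the standard CRC-16/Modbus calculation done both ways. This isn't the attached LabVIEW benchmark (and the table there was apparently much larger); it just illustrates the idea:
```python
def crc16_bitwise(data: bytes) -> int:
    # Straight bit-by-bit CRC-16/Modbus (reflected poly 0xA001, init 0xFFFF).
    crc = 0xFFFF
    for byte in data:
        crc ^= byte
        for _ in range(8):
            crc = (crc >> 1) ^ 0xA001 if crc & 1 else crc >> 1
    return crc

# Precompute a 256-entry table so each byte costs one lookup instead of eight shifts.
_TABLE = []
for b in range(256):
    c = b
    for _ in range(8):
        c = (c >> 1) ^ 0xA001 if c & 1 else c >> 1
    _TABLE.append(c)

def crc16_table(data: bytes) -> int:
    crc = 0xFFFF
    for byte in data:
        crc = (crc >> 8) ^ _TABLE[(crc ^ byte) & 0xFF]
    return crc

# Both methods produce the same CRC for any frame.
assert crc16_bitwise(b"\x01\x03\x00\x00\x00\x0A") == crc16_table(b"\x01\x03\x00\x00\x00\x0A")
```
-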
I can't see what you're actually doing, but I'd say this seems expected... sqlite has somewhat squishy types. https://www.sqlite.org/datatype3.html "Any column in an SQLite version 3 database, except an INTEGER PRIMARY KEY column, may be used to store a value of any storage class." "Any column can still store any type of data. It is just that some columns, given the choice, will prefer to use one storage class over another. The preferred storage class for a column is called its "affinity"." If that makes you feel bad, just remember that almost every interaction you have with your bank relies on a technology whose type system is pure madness.
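A quick way to see the squishiness for yourself, sketched in Python (table and column names are made up):
```python
import sqlite3

conn = sqlite3.connect(":memory:")
# 'INTEGER' here only declares the column's *affinity*, not a hard type.
conn.execute("CREATE TABLE t (x INTEGER)")
conn.execute("INSERT INTO t VALUES (?)", (42,))            # stored as an integer
conn.execute("INSERT INTO t VALUES (?)", ("forty-two",))   # stored as text anyway
conn.execute("INSERT INTO t VALUES (?)", (b"\x01\x02",))   # stored as a blob anyway

for (value,) in conn.execute("SELECT x FROM t"):
    print(type(value), value)
# <class 'int'> 42
# <class 'str'> forty-two
# <class 'bytes'> b'\x01\x02'
conn.close()
```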
-
Transfer Image from cDAQ-9133 Linux RT to cDAQ-9133 WES7
smithd replied to viSci's topic in LabVIEW General
You might just ask your field sales. They can, on occasion, do nice things. -
Sending sensor data from a TCP socket in python(RPI) to LABVIEW
smithd replied to Diegodemf's topic in LabVIEW General
However, this will increase latency; that is the tradeoff. To be reasonably efficient with the network you'd need to transfer bursts of ~20 samples over wired ethernet. How real-time is real time? What does the labview code do with the information?

Then my answer would be that you should write to a file, but as above it depends on exactly how important it is. What happens if you lose a sample? What happens if the network disconnects for 30 seconds?

Yes, you would have to split the file up yourself. The strategy I follow is to have a directory /data with a file /data/temp.ext and a subdirectory /data/done which contains data1.ext, data2.ext, etc. I then use the built-in labview FTP functions (in the connectivity palette, I believe, but I'm not sure) to fetch files out of the /data/done folder. That way I don't accidentally pull off a half-finished file. This appears to be automated in python: https://docs.python.org/3/library/logging.handlers.html#rotatingfilehandler and https://www.blog.pythonlibrary.org/2014/02/11/python-how-to-create-rotating-logs/

If you don't like FTP for some reason, someone put together a winscp wrapper library: https://lavag.org/topic/20474-free-labview-winscp-library/?tab=comments#comment-124582
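A bare-bones Python sketch of that /data plus /data/done strategy (paths, file sizes, and the sample source are all made up; the key point is that a file is only moved into done/ once it is complete):
```python
import os

DATA_DIR = "/data"
DONE_DIR = os.path.join(DATA_DIR, "done")
SAMPLES_PER_FILE = 10000  # arbitrary; pick whatever gives files of a convenient size

def log_samples(sample_source):
    os.makedirs(DONE_DIR, exist_ok=True)
    index = 0
    while True:
        temp_path = os.path.join(DATA_DIR, "temp.ext")
        with open(temp_path, "w") as f:
            for _ in range(SAMPLES_PER_FILE):
                f.write("%f\n" % next(sample_source))
        # Only complete files ever appear in /data/done, so the FTP client on the
        # LabVIEW side never grabs a half-written file.
        index += 1
        os.replace(temp_path, os.path.join(DONE_DIR, "data%d.ext" % index))
```
-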
There does not appear to be anything standard about that 'normal' file. I don't think I've ever seen such a file before. To give you a clear picture of the complexity of what you are asking, and why drjd is quite right... just looking at that example, I would assume the following about your file:
- The entire file is a single json object
- The first line is the one and only key within that object
- There is a CRLF between the key and its value
- The rest of the file is the value of that single key
- The single value is itself an object
- This object consists of key-value pairs separated by "-" and delimited by CRLF
- Every value is a string
- Your file cannot contain strings which contain CRLF
Following these rules, your json output would be: {"Details": {"Name":"abc", "Organization": "anonymous", "Location": "xyz"}}
If that is what you want, then my only answer is that you will have to procedurally parse the entire file from a string and construct the json object manually. If that is not what you want, then you should figure out the rules for your "normal simple" file. Once you figure out the rules, you will still have to manually write the code, because that isn't a standard computer-parseable format that I am aware of. If there are no rules, then you may need to consider a different strategy for whatever it is you are actually trying to accomplish. For example, if you are making your files into json to make them easier for a computer to parse, then you'd be better off just shoving all these "normal simple" text files into elasticsearch or splunk and never looking at them again.
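"Procedurally parse it manually" would look something like this in Python terms -- a sketch built on the assumed rules above, where the example content just mirrors the json output shown:
```python
import json

def simple_file_to_json(text: str) -> str:
    # Rule 2: the first non-empty line is the one and only top-level key.
    lines = [line for line in text.splitlines() if line.strip()]
    top_key = lines[0].strip()
    # Rule 6: every remaining line is a "key-value" pair split at the first "-".
    inner = {}
    for line in lines[1:]:
        key, _, value = line.partition("-")
        inner[key.strip()] = value.strip()
    return json.dumps({top_key: inner})

example = "Details\r\nName-abc\r\nOrganization-anonymous\r\nLocation-xyz\r\n"
print(simple_file_to_json(example))
# {"Details": {"Name": "abc", "Organization": "anonymous", "Location": "xyz"}}
```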
-
The dev suite version worked for me without issue: http://www.ni.com/en-us/support/downloads/software-products/download.developer-suite-all-inclusive.html It downloaded about 5x faster than the web installer, if you're also having trouble.
-
Still broken, don't install. To be clear, I used the standalone installer for 32-bit since that took 3 minutes to download and the web installer would have taken 28 hours.
-
Sending sensor data from a TCP socket in python(RPI) to LABVIEW
smithd replied to Diegodemf's topic in LabVIEW General
I'm super confused... it sounds like you just want to stream data, right? So latency isn't important -- just throughput -- and turning off Nagle won't help. However, you're introducing a latency problem by having LabVIEW request each chunk of data by sending a string with "?" in it. The stated goal is 7 kHz * 50 bytes, which is 2.8 Mbit/s -- hardly anything. But when you introduce round-trip request/response communication into the streaming pathway, throughput becomes limited by network latency. Can you clarify whether there is an actual need for this back and forth? How important is reliability? If latency is unimportant and reliability is very important, I'd suggest writing your data to disk on the raspberry pi and then ftp/scp/webdav-ing it off. If reliability is not important, and this is on a local network, consider UDP.
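To illustrate the 'just stream it' alternative, a rough sketch of what the Pi-side sender could look like in Python (the port number, packing format, and read_sensor stand-in are all made up; the LabVIEW side would simply read fixed-size chunks in a loop):
```python
import socket
import struct

PORT = 6340             # hypothetical port number
SAMPLE_FMT = ">d42s"    # assumed 50-byte sample: big-endian timestamp + 42-byte payload
BATCH = 20              # samples per TCP write, to avoid one tiny packet per sample

def read_sensor():
    return 0.0, b"\x00" * 42   # stand-in for the real acquisition code

with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
    srv.bind(("0.0.0.0", PORT))
    srv.listen(1)
    conn, _ = srv.accept()
    with conn:
        while True:
            # No request/response: just pack a batch of fixed-size samples and push it.
            chunk = b"".join(struct.pack(SAMPLE_FMT, *read_sensor()) for _ in range(BATCH))
            conn.sendall(chunk)
```
-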
This one also has the full suite web installer: http://www.ni.com/download/web-based-installer-labview-development-system-2018/7924/en/ and the full driver dvd http://www.ni.com/white-paper/55036/en/
-
NI Modbus API on GitHub
smithd replied to Porter's topic in Remote Control, Monitoring and the Internet
I got back to looking at the code today; I had forgotten, but it looks like I did make the object thread-safe. All of the request-response calls are mutexed. For giggles, I spent some time on it today and just pushed a few changes to a fork here: https://github.com/smithed/LabVIEW-Modbus-API I didn't like how my old team did the transaction ID fix, so that's one commit. The second commit is to fix the serial thing so the code is no longer nasty. For ASCII I added a method to 'read until CRLF' to the network class. For RTU I added a cheat method which tries to guess how big the packet is; if the packet is of unknown size, it risks a CRC collision but just polls whatever is at the port. It totally ignores the 3.5-char-times nonsense now. I tested all 3 standard pairs on localhost, with the serial ports emulated using com0com. This isn't a perfect representation of real life, but it works ok.
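The 'read until CRLF' idea, roughly, in Python terms (not the actual LabVIEW code; a socket stands in for the transport just to show the framing logic):
```python
def read_ascii_frame(sock) -> bytes:
    # Modbus ASCII frames start with ':' and always end with CR LF, so a frame can be
    # delimited by its terminator without knowing its length in advance.
    frame = b""
    while not frame.endswith(b"\r\n"):
        byte = sock.recv(1)        # one byte at a time: simple, not fast
        if not byte:
            raise ConnectionError("connection closed mid-frame")
        frame += byte
    return frame
```
-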
Definitely not crazy, I've seen this a few times before. I'm curious though what happens if you put the indicator on the connpane?
-
OPC UA experience
smithd replied to FixedWire's topic in Remote Control, Monitoring and the Internet
I wasn't the one to do this, but someone here evaluated it and had issues with performance -- it seemed to all be running in a relatively single-threaded way, even on the server. Also, the newer features (alarms, etc) took up a ton of memory compared to what we expected. However performance issues are always tied to a use case -- should be easy to grab an eval and test. Because of the way that OPC UA data models work, the NI server shows up kind of strangely in third party clients. It probably works fine for more advanced tools like kepserver, but I was playing with a fairly low level python client and the representation was weird to me. If you want to use a 3rd party client I'd also grab the eval and test it out.