Everything posted by smithd
-
I would assume it's because C# is the nicest of the top languages for writing a desktop program, and everyone wants to make development more efficient. Python and JS I think you can easily throw out for large projects, PHP is web, and Java's popularity is with back-end applications, not desktop programs (I don't have Java installed at home or at work, and I'd have to really want to use a program to install it). So that leaves C++ or C#, and I think you'll have trouble finding people who think they are more productive in 'cross-platform' C++ vs C#.

Linux has its own cross-platform problems. I don't know how you navigate these issues, but when I want to try out programs that are Linux-only, I constantly run into install instructions restricted to one or two flavors of Linux (presumably 'the developers run Fedora, so we have instructions for Fedora'), with the other flavors getting either no instructions, out-of-date instructions, or out-of-date builds. So I end up with several VMs of different Linux types rather than one VM for playing around. I know bigger projects like the DBs, Xilinx, etc. don't have these issues, but big projects have support everywhere.

Something tells me that NI is reluctant to provide the source code to an alpha-alpha version of the whole new platform they've been investing in, but you're right, I can't imagine why.
-
I also heard phrases like "once NXG reaches parity" and similar. I can't imagine there would be any reason to keep using LabVIEW over NXG once that critical point is reached; it's a new IDE for the same language.

https://github.com/ni/nidevlabs is, I believe, what you're looking for, although it doesn't appear to be up to date (last updated with the February beta). Microsoft has been moving toward cross-platform C# in leaps and bounds with the purchase of Xamarin, the .NET Core/Standard separation, etc. I have read the same thing, that WPF itself won't ever be supported, but I'm assuming that's something NI has kept in mind. If you play with the NXG editor and make it hang with a diagram it doesn't like, you'll see statuses along the lines of "background compilation is still running" (gist of it), which seems to support the comment that "the front-end is quite separate from the business logic".

They've been gunning for the mid-range stuff with the sbRIO/SOM releases, but yeah, still pretty pricey, especially since with the SOM you need to make your own board anyway (or I suppose use something like what Cyth was showing off on the demo floor, but they've apparently opted not to share baseline pricing, so I don't know how the cost compares).
-
Is the LabVIEW queue not based on a linked list? If it's an array then sure, they should be similar, but if it has to walk a 1000-element linked list of individually wrapped doubles then that's obviously going to be terrible.
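To illustrate the access pattern I mean, here is a Python sketch of the two shapes (Python hides the real memory layout, so treat this as an illustration rather than a benchmark):

```python
class Node:
    """One element of a singly linked list: a boxed double plus a pointer."""
    __slots__ = ("value", "next")

    def __init__(self, value, next=None):
        self.value = value
        self.next = next

# A 1000-element linked list vs. an equivalent flat array.
head = None
for v in reversed(range(1000)):
    head = Node(float(v), head)
flat = [float(v) for v in range(1000)]

def walk(node):
    """Every step is a pointer chase into a separate allocation."""
    total = 0.0
    while node is not None:
        total += node.value
        node = node.next
    return total

def scan(arr):
    """Contiguous storage: one cache-friendly pass."""
    return sum(arr)
```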
-
http://zone.ni.com/reference/en-XX/help/371361J-01/lvhowto/creating_cond_disable_struc/ "The VI must be in a LabVIEW project to access this symbol." Probably the best of both worlds is a default case that calls the method Shaun suggested, and otherwise it's compiled in.
-
If I'm understanding you, the console you describe is only available on Pharlap systems with video hardware. cRIOs have always had to be debugged through something like syslog or through the serial port, and in my experience I've only ever used the serial port to monitor NI's failures, not my own. I do agree that NI should build the syslog stuff into the product, and it would be nice if there were a clear and obvious way to say 'write this output string to any connected ssh session'... but the current level of functionality is not a change for most cRIO users, so I can't imagine this is a priority.
-
That's the one I had used forever, but the post I put above is a specific binary + configuration for wrapping the built-in code on Linux RT. If you click on the August 2016 PDF (1559 KB) and go to pg 4, you'll see what I'm talking about: the log gets routed automatically to the event viewer in the system web server.

As for viewers with different features, I've used and like Kiwi: http://www.kiwisyslog.com/kiwi-syslog-server It's got a ton of features for filtering and routing and such, and it has a nice web interface. If you just want to monitor one node then maybe you don't need something like that; I'm sure you can find a free option out there like https://github.com/tmanternach/WebSysLog or https://github.com/rla/mysql-syslog-viewer in conjunction with http://www.rsyslog.com/

Personally, though, I think the viewers end up getting in the way, so I normally just open the log in Notepad++ and Ctrl+F for stuff.
-
How to find a good LabVIEW/python software consultant
smithd replied to joshxdr's topic in LabVIEW General
Well for NI, you can search on ni.com/alliance: https://partners.ni.com/partner_locator/search.aspx -
https://forums.ni.com/t5/NI-Linux-Real-Time-Documents/Getting-the-Most-Out-of-your-NI-Linux-Real-Time-Target/ta-p/3523211 - syslog. I believe you can view it from the web interface automatically if you walk through their steps. You can also open up a console and just type "cat syslog" every few seconds to view it semi-real-time.

Also: I don't know how, but I'm willing to bet you can make some sort of pipe redirect on Linux where everything written to syslog also gets displayed on the console.

Also: I don't remember how, but I could swear there is a file in the /dev folder which will output to an ssh session via PuTTY, so you could have LabVIEW open a write session to one of those files and get it to display on your shell.

Probably simpler is to try to encapsulate the core parts of the code and debug outside of VeriStand before putting it into the engine.
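For the syslog route, here's a minimal sketch of writing to the system log using Python's standard library (assuming a Python interpreter on the target; the tag name is made up):

```python
import syslog

# Open a connection to the local syslog daemon with an identifying tag.
syslog.openlog("my_rt_app", logoption=syslog.LOG_PID, facility=syslog.LOG_USER)

# This lands in the system log, where the web interface or an ssh
# session (e.g. tail -f on the log file) can pick it up.
syslog.syslog(syslog.LOG_INFO, "debug output from the RT application")
syslog.closelog()
```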
-
Network streams - multiple windows executables to compact RIO target
smithd replied to parsec's topic in LabVIEW General
To be clear, this should be on the desktop side. That is:

cRIO: reader <id1..N>, writer not selected
app1: writer //localhost:<random2>/<random3>, reader //<crio>/<id1>
app2: writer //localhost:<random4>/<random5>, reader //<crio>/<id2>
etc.

It sounds like you made all contexts unique anyway, but that should work.
-
View Executable on Web browser
smithd replied to Cat's topic in Remote Control, Monitoring and the Internet
Forgive me if I'm going too basic, but I figure it can't hurt... I can't quite tell from your post where you're at.

NI used to have a product called remote front panels (I pretend it doesn't exist anymore in order to feel safe), which was a tool that converted your normal VI front panel into a little applet that could run in a browser.

The preferred way is to use a web server to host your application and explicitly expose features through a well-defined and structured web interface. The basic interface is HTTP, which consists of four main request-response types: GET, POST, PUT, and DELETE. Another mechanism is something called websockets, which basically piggybacks on HTTP to create a full-duplex data packet layer over TCP.

To communicate with a web server you need some client. You could write this in a standard language like LabVIEW or C#, but because no executable software can be loaded on the computer, you're limited to a system already there, like Flash or JavaScript in a browser. Assuming you have a soul and thus don't want Flash, you are left with exactly one option for your client: HTML/CSS/JavaScript. These files will be hosted as static files in your web server (or could be dynamically generated too) and retrieved using an HTTP GET request. Once downloaded into a browser, an HTML page will typically load a JavaScript file for execution, and that JavaScript file can issue AJAX requests (which just means the JavaScript can ask your web server for more resources). The JavaScript can then manipulate the HTML to display whatever you want to your user. Fortunately, for the basic case, various people have already done this for you (hooovahh's link is a good one).

Depending on where you are on the learning scale, NI did an OK job of writing some of this stuff here:
https://forums.ni.com/t5/LabVIEW-Web-Development/Web-Services-Getting-Started-Series/ta-p/3498686
https://forums.ni.com/t5/LabVIEW-Web-Development/Web-Services-Publishing-Data-Series/ta-p/3501626
https://forums.ni.com/t5/LabVIEW-Web-Development/Web-Connectivity-Options-in-LabVIEW/ta-p/3501679
(all part of the same community group, different hub pages)
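Just to make the request-response idea concrete, here is what two of those HTTP verbs look like from a scripted client (Python standard library; the URLs and JSON body are made up):

```python
import urllib.request

# GET: fetch a resource, e.g. the static index.html your web service hosts.
req = urllib.request.Request("http://localhost:8080/index.html", method="GET")
with urllib.request.urlopen(req) as resp:
    print(resp.status, resp.read()[:80])

# PUT: send data to the server; POST and DELETE follow the same shape.
req = urllib.request.Request(
    "http://localhost:8080/api/setpoint",
    data=b'{"value": 42}',
    headers={"Content-Type": "application/json"},
    method="PUT",
)
with urllib.request.urlopen(req) as resp:
    print(resp.status)
```

In the browser case, the JavaScript AJAX calls are doing exactly this, just from inside the page.
-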
Network streams - multiple windows executables to compact RIO target
smithd replied to parsec's topic in LabVIEW General
Assuming I'm understanding you correctly, you basically need to read and comprehend this: http://zone.ni.com/reference/en-XX/help/371361N-01/lvconcepts/endpointurls/

But to point you to the right place: "Note: Only one application on each computer can specify the default context. Therefore, if you have multiple applications on a single computer that use network streams, you must assign a URL instead of a name to each endpoint in those applications."

So your stream name shouldn't be <randomnumber>; it should be //localhost:<random1>/<random2>. I don't know how fast your code starts up, but if you are going to use a shared variable I'd suggest setting your endpoint create to a fast timeout (1-2 sec) in case two exes try to claim the endpoint at the same time.
-
Network streams - multiple windows executables to compact RIO target
smithd replied to parsec's topic in LabVIEW General
The context is just any string; it's not the application name. It also only matters if there is more than one exe running on a target, so you never need it for the cRIO side. On Windows, using the application name only works if it's unique -- simpler to pick a random number.

Since streams are 1:1, if you want to connect multiple senders simultaneously you need either multiple hardcoded reader streams on the cRIO or you need to define your own listen+accept scheme, just like TCP. I would just use TCP, but if you like streams you would do this:

- On the cRIO, create a writer stream called "streamaccept".
- On the Windows machine, connect to streamaccept using reader endpoint <random1>:<random2>.
- On the cRIO, create <random3> and send it over the streamaccept stream to Windows, then launch a process to handle that stream (reader <random3>).
- On Windows, receive <random3> from streamaccept and create a new write endpoint <random4>:<random5> which connects to <crio>/<random3>. Close the <random1>:<random2> endpoint.
- Connection established, send data.
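For comparison, here is the listen+accept pattern you get for free with raw TCP, as a rough Python sketch (the port number and handler are made up):

```python
import socket
import threading

def handle_client(conn):
    """Per-connection worker -- the equivalent of the per-stream process above."""
    with conn:
        while True:
            data = conn.recv(4096)
            if not data:
                break
            # ... process data from this one sender ...

# The cRIO-side "streamaccept" role: one well-known listener that hands
# each new sender its own dedicated connection.
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("", 55000))  # hypothetical port
srv.listen()
while True:
    conn, addr = srv.accept()
    threading.Thread(target=handle_client, args=(conn,), daemon=True).start()
```

With TCP, the accept step produces the dedicated per-client connection automatically, which is exactly the bookkeeping the stream scheme above has to do by hand.
-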
Changing available inputs based on user selection
smithd replied to ocmyface's topic in User Interface
You don't need to use classes, but an advantage here is that you can pass around the different data sets as different objects, rather than having to convert everything back to one big cluster with all the options. That having been said, I'm assuming you're basically transmitting configuration as string commands to the DAQs, so it may be that you just use an array of strings for configuration and then have a set of UIs which interpret that string configuration just like the DAQ does.

An example of a configuration like you describe can be seen here, using classes: http://www.ni.com/example/51881/en/ For each device you can right-click and select to add a current channel or a voltage channel or whatever, and when the user clicks on that item in the tree it shows the UI associated with that class. If you have a finite number of classes and they all work using the same format of configuration strings, there is nothing to say you couldn't do something similar with plain VIs, where you just select the VI to add to the subpanel depending on the string that says "type=thermocouple" or whatever. A sketch of that dispatch idea follows below.
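That string-driven dispatch might look something like this (Python sketch; the config format and panel names are hypothetical):

```python
def parse(cfg: str) -> dict:
    """e.g. "type=thermocouple;range=K" -> {"type": "thermocouple", "range": "K"}"""
    return dict(pair.split("=", 1) for pair in cfg.split(";"))

def thermocouple_panel(cfg: dict) -> None:
    print("showing thermocouple UI for", cfg)

def voltage_panel(cfg: dict) -> None:
    print("showing voltage UI for", cfg)

# Map the "type=..." value in the config string to the panel that edits it.
PANELS = {
    "thermocouple": thermocouple_panel,
    "voltage": voltage_panel,
}

def show(cfg_string: str) -> None:
    cfg = parse(cfg_string)
    PANELS[cfg["type"]](cfg)  # the UI is chosen purely from the string config

show("type=thermocouple;range=K")
```
-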
Since I didn't see it mentioned elsewhere, it sounds like you might be in need of https://en.wikipedia.org/wiki/Composition_over_inheritance A measurement isn't a type of limit check, and a limit check isn't a type of execution behavior. It may be that you want to split this up into executionstep.lvclass, which contains an instance of thingtoexecute.lvclass (your measurement) and an instance of analysistoperform.lvclass (your limit checks), or something along those lines.
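In text-language terms, that split might look like this (Python sketch; class and method names are placeholders echoing the .lvclass names above):

```python
class Measurement:
    """thingtoexecute.lvclass: knows how to produce a value."""
    def execute(self) -> float:
        raise NotImplementedError

class LimitCheck:
    """analysistoperform.lvclass: knows how to judge a value."""
    def __init__(self, low: float, high: float):
        self.low, self.high = low, high

    def analyze(self, value: float) -> bool:
        return self.low <= value <= self.high

class ExecutionStep:
    """executionstep.lvclass: *has* a measurement and an analysis,
    rather than inheriting from either."""
    def __init__(self, measurement: Measurement, analysis: LimitCheck):
        self.measurement = measurement
        self.analysis = analysis

    def run(self) -> bool:
        return self.analysis.analyze(self.measurement.execute())
```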
-
-1073676294 should be a warning, not an error, and is expected. Most protocols are CRLF-delimited, so if you request, say, 100 bytes and get a full 100 bytes rather than hitting a CRLF, it might mean you didn't read a big enough packet. Were I to develop VISA I probably wouldn't put it in every read call, but hey. The CRC error is weird; I'd let Porter answer that one if he can. Do you get a similar CRC error with the v1.1.5.39 lib?
-
It looks like you switched which register to read between code editions. Earlier you were requesting 8193 (0x2000), and now you are requesting 0x07CF, which doesn't exist according to the manual. The timeout just means the device never responded, which likely means your device never received the message or decided to ignore it (for example, if the slave address doesn't match).

At this point, I'd suggest simplifying this down a bit. Drop down a VISA open, then VISA configure serial, potentially a property node, then a write and a read, then a close (similar to the example in post 1 here: https://forums.ni.com/t5/Instrument-Control-GPIB-Serial/Read-data-from-IED-using-MODBUS-RS485/td-p/1979105). For the write, wire up a string set to hex mode as "01 08 00 00 12 34 ED 7C". For the read, specify a length of 8. You should receive an exactly matching string. This is according to PDF pg 89 (manual 4-15, section 4-4-4) of the manual you attached, which specifies an echo test. This way you can fiddle with the settings on the serial port until you get something back, at which point you should be safe to transfer those settings to the other two libraries and try again.
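If it helps, here's the same echo test as a pyserial sketch (the port name and serial settings are placeholders; take the real ones from pg 1-2 of the manual):

```python
import serial  # pyserial

# Placeholder settings -- match these to the device's documented defaults.
ser = serial.Serial(
    port="COM4",
    baudrate=9600,
    bytesize=serial.EIGHTBITS,
    parity=serial.PARITY_EVEN,
    stopbits=serial.STOPBITS_ONE,
    timeout=2.0,
)

# Modbus function 0x08 (diagnostics), sub-function 0x0000: echo test.
frame = bytes.fromhex("01 08 00 00 12 34 ED 7C")
ser.write(frame)

echo = ser.read(8)  # a healthy link returns the exact same 8 bytes
print(echo == frame, echo.hex(" "))
ser.close()
```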
-
That's more like it; now you can see where it writes the command (28) and then reads back up to 513 bytes in response (the max size of Modbus ASCII). The thing I missed before is that you selected ASCII, when your device seems to use Modbus RTU. If you run that code again with ASCII changed to RTU, you should see the write (28) change as well, becoming a binary string. Read (29) will also change size.
-
Yes, that library is mostly unsupported now. That's part of the reason I tend to recommend Porter's implementation (http://sine.ni.com/nips/cds/view/p/lang/en/nid/214230) for master-only applications. He modeled the API after that one (1.1.5.39), but it's totally open source (no locked diagrams) and he made some improvements to the serial handling.
-
Note: by the time I got to the end I saw the issue, but I'm leaving my thoughts here in case they help.

I can't recall exactly, but I believe that is still part of the init function (basically it's a flush on init: get bytes at port and read that many bytes; note the flush I/O buffer right afterwards, which is the end of init). That is, that part of the trace makes sense (steps 1-12). During the read, the RTU library must poll the port, which is why you see it getting attributes over and over again (in this case, bytes at port). Then close gets called, but the library continues to poll bytes at port, which made no sense to me. Just based on the trace, I would assume there is a race condition, but in the code there doesn't appear to be one. Finally, there is no *write*, which is important given that Modbus is a request-response protocol.

The issue: it looks like in your sample code above you instantiate a new serial *slave* rather than a master. Since you want to read values from another device, you want to create a serial master. That is why the trace is so bizarre; there *is* a race condition. Init (1-12) is called inline and launches a background thread to poll the port (13-16), then your VI reads from the slave memory (local access of a data value reference) and immediately closes (step 17). I'm guessing the background thread then reopens the port (step 18) and continues polling (19-26).

The solution should be to change over to a serial master instance.
-
It looks like they deliberately made their terminology confusing. I can't imagine how they managed it unless they were actively trying.

First, the obligatory intro-to-Modbus document, if you aren't familiar: http://www.ni.com/white-paper/7675/en/

Skipping ahead to section 5 in your PDF, you can see what you put in those fields. It looks like the easiest thing to check would be the 'status' field, so you'd enter either:
- 0x0002 for starting address and 4 for bytes to read, or
- 0x2001 for starting address and 2 for bytes to read

If you do this and get a VISA error or error 56, it means you failed to communicate with the device. Depending on the library you are using, there is an error range associated with Modbus errors, meaning communication was successful but the device rejected your request. The object-based libraries return the Modbus error code as a LabVIEW error code with an offset (e.g. error 100001 = Modbus error 1), with the catch being to figure out what that offset is (it varies by library). Pg 4-9 has the error codes possible for your device, which in this case are just two standard Modbus errors, 0x02 and 0x03.

If you get 56 or similar, be sure to check pg 1-2 for the right serial settings to use. It looks like your device might have a similar issue to what I described here with regard to the number of bits: the default configuration of your device does not appear to match the Modbus protocol; instead it seems to match their custom protocol. However, to their credit, they take note of this in the * below the table in section 1-1-6.

Finally, if the NI one fails you, and you just need master functionality, try this one: http://sine.ni.com/nips/cds/view/p/lang/en/nid/214230
-
PNG conversion takes a long, long time, so you should use the IMAQ image-to-string functions to see if the PNG conversion is taking a while with your particular images. I found that the quality setting didn't help performance much, but you can try it (PNGs here are lossless, so it's the quality of compression).

If your drive is a spinning disk, you must not write each image to its own file; you must pack the images into a single file and unpack later. The overhead of creating files on a spinning-rust drive is killer -- it probably cut my throughput by a factor of 8. I made an in-memory zip creator for this purpose, but unfortunately I cannot share it. You can do much the same by creating PNG strings and writing them to a binary file. For an SSD this isn't an issue.

The worker thread setup is the same as producer-consumer, but with N consumers (one per CPU core here) to each producer. You can do this with 'start asynchronous call', a parallel for loop, or just dropping down several copies of the same function. See the sketch below.
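A rough text-language sketch of that pack-into-one-file approach with a pool of encoder workers (Python with Pillow; the length-prefixed record format is my invention, not anything from the original code):

```python
import io
import struct
from concurrent.futures import ThreadPoolExecutor

from PIL import Image  # Pillow

def encode_png(frame) -> bytes:
    """Compress one frame (a numpy array) to a PNG byte string in memory."""
    buf = io.BytesIO()
    Image.fromarray(frame).save(buf, format="PNG")
    return buf.getvalue()

def pack(frames, path, workers=4):
    """One output file of length-prefixed PNG records: no per-image file overhead."""
    with ThreadPoolExecutor(max_workers=workers) as pool, open(path, "wb") as f:
        for png in pool.map(encode_png, frames):  # N consumers, order preserved
            f.write(struct.pack("<I", len(png)))  # 4-byte little-endian length
            f.write(png)
```

Unpacking later is the mirror image: read four bytes, read that many bytes, decode, repeat.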
-
With RTU in particular, I've found that there are a lot of devices that don't follow the spec with regard to parity, stop bits, etc. Since it's RTU, I'd verify the settings from your device's spec sheet and check them against the Modbus settings in section 2.5.1 here: http://www.modbus.org/docs/Modbus_over_serial_line_V1_02.pdf

"Bits per Byte: 1 start bit, 8 data bits, 1 bit for parity completion, 1 stop bit. Even parity is required, other modes (odd parity, no parity) may also be used. In order to ensure a maximum compatibility with other products, it is recommended to support also No parity mode. The default parity mode must be even parity. Remark: the use of no parity requires 2 stop bits."

There's a decent number of devices out there which use no parity and 1 stop bit, making them not Modbus devices. To resolve this with the Modbus library, you must use a VISA property node to re-set the stop bit after initialization and before talking to your device.
-
There are some examples for IMAQ with worker threads; I'd take a look at those. I don't think producer-consumer will particularly help, since IMAQ basically gives you that for free if you enable buffering in the driver. What could help is having N workers, which would give you the ability to use all CPU cores for processing; that could allow you to write your images as PNGs (reducing image size) if your disk is the bottleneck.

I would suggest using a disk benchmark like http://crystalmark.info/software/CrystalDiskMark/index-e.html to see how fast you can write to disk, then determine how big your images are and how many bits, and that will tell you if your disk is the bottleneck (or if it will be, even if it isn't right now).

Also, you mentioned buffering in RAM. If this is a finite acquisition then I'd definitely look into that: maintain a set of N IMAQ references and use a new reference for each incoming frame. However, at 100 fps that's a lot of memory.
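To put made-up numbers on that last point: a 2048 x 1024 image at 16 bits/pixel is about 4 MB, so 100 fps is roughly 400 MB/s, i.e. about 4 GB of RAM for every 10 seconds you buffer (your actual frame size may differ, but the arithmetic is the same).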
-
I16 Image into U8 Image only lower byte
smithd replied to @lex's topic in Machine Vision and Imaging
Agreed, unclear. The simplest answer is that if you just want to play with the raw pixel values, you can use IMAQ ImageToArray and ArrayToImage. That way you can mess with the integers until you get the result you want, and then figure out if there is an IMAQ function to do it fast.
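If the goal really is just the low byte of each pixel, the raw-array math might look like this (numpy sketch; pretend the input came from IMAQ ImageToArray):

```python
import numpy as np

# A toy I16 image standing in for the IMAQ array.
img_i16 = np.array([[256, 257], [511, -1]], dtype=np.int16)

# Keep only the lower byte of each pixel, then reinterpret as U8.
img_u8 = (img_i16.view(np.uint16) & 0xFF).astype(np.uint8)
print(img_u8)  # [[  0   1]
               #  [255 255]]
```
-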
Well, this is odd... I don't think I saw this as unread, but I just looked at notifications and here it was. Hrm...

The long story short is that it sounds like the solution of splitting the program in two is a good fit. To quickly respond, though...

The advantage is that it can be displayed (in theory) directly in any browser, since it's just using HTTP features. I do have a lot of websocket code built out as well, so in the existing application that's how I'm transferring images. For display, yes, it's the same (I'm just using the ROI and overlay features of IMAQ). What's more difficult is all of the low-level UI stuff related to, for example, drawing a box around a feature and having it show up. It's not hard so much as bug-prone and time-consuming to develop from scratch.

What I was trying to say is that the way those features are used is totally different. The histogram, drawings, etc. are all part of the offline mode of operations, so I can easily use something (like LabVIEW) that has slow rendering capability, so long as I can make a faster-rendering application that does the simple case (A+B, with no user interaction except to resize windows). The images themselves are 3 MB raw or smaller; there's just a ton of them.

This is another example of how the use cases are different. Without going into detail, the best way to convey the difference is to imagine a large piece of machinery: when the system is being tweaked, people only want to see their small part, but when the system is operational the entire system must be monitored simultaneously, with a lower level of detail. That's why I think this split program will work: something with GPU rendering for the high-throughput mode, and LabVIEW as a development shortcut for the low-throughput mode.