Posts posted by smithd

  1. 14 hours ago, ShaunR said:

    Silverlight was definitely the stake through the heart of Web UI Builder, just as (I think) C# will be the same for LabVIEW. I've been actively moving over to Linux for a while now and .NET has been banned from my projects for donkeys' years. So doubling down with a Windows-only .NET IDE is a bit perplexing. Especially since at this point, I consider Windows pretty much a legacy platform for T&M, IoT, DAQ and pretty much everything else LabVIEW is great for. When Windows 7 is finally grandfathered, Windows will no longer be on my radar at all.

    I would assume it's because C# is the nicest of the top languages for desktop programs, and everyone wants to make development more efficient. Python and JS you can easily throw out for large projects, PHP is web-only, and Java's popularity is in back-end applications rather than desktop programs (I don't have Java installed at home or at work, and I'd have to really want a program to install it for one). So that leaves C++ or C#, and I think you'll have trouble finding people who feel more productive in 'cross-platform' C++ than in C#.

    Linux has its own cross-platform problems. I don't know how you navigate them, but when I want to try out Linux-only programs I constantly find that install instructions are restricted to one or two flavors of Linux (presumably 'the developers run Fedora, so we have instructions for Fedora'), and the other flavors have either no instructions, out-of-date instructions, or out-of-date builds. So I end up with several VMs of different Linux distributions rather than one VM for playing around. I know bigger projects like the databases, Xilinx, etc. don't have these issues, but big projects have support everywhere.

    9 hours ago, lordexod said:

    As for "Web UI Builder" it should already be open source.

    Something tells me that NI is reluctant to provide the source code to an alpha-alpha version of the whole new platform they've been investing in, but you're right, I can't imagine why :P

  2. On 5/25/2017 at 11:13 AM, smarlow said:

    I kept hearing phrases like "as you migrate", and "when you migrate" rather than "if" you migrate.

    I also heard phrases like "once NXG reaches parity" and similar. I can't imagine there being any reason to keep using LabVIEW over NXG once that critical point is reached. It's a new IDE for the same language.

    On 5/24/2017 at 7:43 AM, ShaunR said:

    Can we make our own native controls? 

    https://github.com/ni/nidevlabs is, I believe, what you're looking for, although it doesn't appear to be up to date (last updated with the February beta).

    On 5/24/2017 at 4:25 AM, smarlow said:

    One of the concerns I have over the possibility of being herded into NXG is that it's based on the Windows Presentation Foundation (WPF), and so it is a Windows-only program; there is no Mac or Linux version, and probably never will be.

    Microsoft has been moving towards cross-platform C# in leaps and bounds with the purchase of Xamarin, the .NET Core/Standard separation, etc. I have read the same thing, that WPF itself won't ever be supported outside Windows, but I'm assuming that's something NI has kept in mind. If you play with the NXG editor and make it hang with a diagram it doesn't like, you'll see status messages along the lines of "background compilation is still running" (that's the gist of it), which seems to support the comment that "the front-end is quite separate from the business logic".

    On 5/25/2017 at 5:34 PM, MarkCG said:

    It would be cool if LabVIEW could gain some ground in the embedded world, instead of becoming more and more a high performance high cost niche.

    They've been gunning for the mid-range stuff with the sbRIO/SOM releases, but yeah, it's still pretty pricey, especially since with the SOM you need to make your own carrier board anyway (or I suppose use something like what Cyth was showing off on the demo floor, but they've apparently opted not to share baseline pricing, so I don't know how the cost compares).

  3. 3 hours ago, mje said:

    Queue vs DVR shouldn't make much difference if all other implementation details are equivalent, since both use the same underlying synchronization method. That said, the synchronization will prevent either method from being as performant as a native array implementation if access speed is a concern.

    Is the LabVIEW queue not based on a linked list? If it's an array then sure, they should be similar, but if it has to walk a 1000-element linked list of individually wrapped doubles then that's obviously going to be terrible.

  4. 5 hours ago, hooovahh said:

    Yeah, for me the conditional disable says that OS is not a defined symbol, so it falls through to the default case, which here is Linux. NI has lots of multiplatform code, so you might want to look into how they do OS detection.

    http://zone.ni.com/reference/en-XX/help/371361J-01/lvhowto/creating_cond_disable_struc/

    "The VI must be in a LabVIEW project to access this symbol."

    Probably the best of both worlds is a default case that calls the method Shaun suggested, with the OS compiled in for all the other cases.

  5. 2 hours ago, Zyl said:

    I perfectly understand that there are many tracing options that come with Linux. But I don't think that NI not picking one solution is a good idea. When you have used the RT targets for years and gotten used to debugging with the console, then you switch to Linux RT and you've got nothing anymore... or at least nothing immediately operational...

    If I'm understanding you, the console you describe is only available on Pharlap systems with video hardware. cRIOs have always had to be debugged through something like syslog or the serial port, and in my experience I've only ever used the serial port to monitor NI's failures, not my own.

    I do agree that NI should build the syslog stuff into the product, and it would be nice if there were a clear and obvious way to say 'write this output string to any connected SSH session'... but it's not like the current level of functionality is a change for most cRIO users, so I can't imagine this is a priority.

  6. 7 hours ago, rolfk said:

    Well, as far as the syslog functionality itself is concerned, we simply make use of the NI System Engineering-provided library that you can download through VIPM. It is a pure LabVIEW VI library using the UDP functions, and it should work on all systems.

    As for having a system console on Linux, there are many ways to do that which Linux actually comes with, so I'm not sure why it couldn't be done. The problem under Linux is not that there are none, but rather that there are so many different solutions that NI may have decided not to pick any specific one, as Unix users can be pretty particular about what they want to use and easily find everything else simply useless.

    That's the one I'd used forever, but the post I linked above is a specific binary plus configuration for wrapping the built-in code on Linux RT. If you click on the August 2016 PDF (1559 KB) and go to page 4, you'll see what I'm talking about: the log gets routed automatically to the event viewer in the system web server.

     

    As for viewers with different features, I've used and like Kiwi: http://www.kiwisyslog.com/kiwi-syslog-server
    It's got a ton of features for filtering and routing and such, and it has a nice web interface.

    If you just want to monitor one node then maybe you don't need something like that; I'm sure you can find a free option out there like https://github.com/tmanternach/WebSysLog or https://github.com/rla/mysql-syslog-viewer in conjunction with http://www.rsyslog.com/

    Personally, though, I think the viewers end up getting in the way, so I normally just open the log in Notepad++ and Ctrl+F for stuff.
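
    Since rolfk mentions that the NI library is just the UDP functions under the hood, here is roughly what a bare-bones syslog send looks like; a minimal Python sketch, where the server address, tag, and message are made-up placeholders:

    import socket

    # Minimal RFC 3164-style syslog message: "<PRI>TAG: message".
    # PRI = facility * 8 + severity; 14 = facility 1 (user-level), severity 6 (info).
    msg = "<14>myapp: something happened".encode("utf-8")

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.sendto(msg, ("192.168.1.100", 514))  # hypothetical syslog server, standard UDP port 514
    sock.close()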

  7. https://forums.ni.com/t5/NI-Linux-Real-Time-Documents/Getting-the-Most-Out-of-your-NI-Linux-Real-Time-Target/ta-p/3523211

    Syslog. I believe you can view it from the web interface automatically if you walk through their steps. You can also open up a console and just type "cat syslog" every few seconds to view it semi-real-time.

    Also: I don't know how, but I'm willing to bet you can set up some sort of pipe or redirect on Linux where everything written to syslog also gets displayed on the console.

    Also: I don't remember how, but I could swear there is a file in the /dev folder which will output to an SSH session via PuTTY, so you could have LabVIEW open a write session to one of those files and get output to display in your shell.

    Probably simpler is to encapsulate the core parts of the code and debug them outside of VeriStand before putting them into the engine.
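
    For the "cat syslog every few seconds" idea, a throwaway Python sketch that polls a log file and prints only the new lines (the path is a placeholder; check where your target actually writes its log):

    import time

    LOG = "/var/log/messages"  # placeholder path; adjust to wherever your target logs

    with open(LOG, "r") as f:
        f.seek(0, 2)            # start at the end of the file, like tail -f
        while True:
            line = f.readline()
            if line:
                print(line, end="")
            else:
                time.sleep(1)   # nothing new yet; poll again shortly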

  8. 49 minutes ago, parsec said:

    I have tried this but I still can't get it to work. I am creating two random strings, so the reader name on the cRIO is //localhost:random1/random2

    To be clear, this should be on the desktop side.

    That is,

    cRIO: reader <id1-N> writer not selected

    app1: writer //localhost:<random2>/<random3>

    reader //<crio>/<id1>

    app2: writer //localhost:<random4>/<random5>

    reader //<crio>/<id2>

    etc..

    It sounds like you made all the contexts unique anyway, but that should work.

  9. 8 hours ago, Cat said:

    Computer A: Running a LabVIEW application -- call it TestWeb.exe

    Computer B: Running Explorer/Chrome/Firefox. No software can be loaded or installed on this computer, including any NI software.

    Forgive me if I'm going too basic, but I figure it can't hurt... I can't quite tell from your post where you're at.

    NI used to have a feature called remote front panels (I pretend it doesn't exist anymore in order to feel safe), which converted your normal VI front panel into a little applet that could run in a browser.

    The preferred way is to use a web server to host your application and explicitly expose features through a well-defined and structured web interface. The basic interface is HTTP, which consists of four main request-response types: GET, POST, PUT, and DELETE. Another mechanism is something called WebSockets, which basically piggybacks off HTTP to create a full-duplex data packet layer over TCP.

    To communicate with a web server you need some client. You could write this in a standard language like LabVIEW or C#, but because no executable software can be loaded on the computer, you're limited to whatever is already there, like Flash or JavaScript in a browser. Assuming you have a soul and thus don't want Flash, you are left with exactly one option for your client: HTML/CSS/JavaScript. These files will be hosted as static files in your web server (or could be dynamically generated) and retrieved using an HTTP GET request. Once downloaded into a browser, an HTML page will typically load a JavaScript file for execution, and that JavaScript can issue AJAX requests (which just means the JavaScript can ask your web server for more resources). The JavaScript can then manipulate the HTML to display whatever you want to your user. Fortunately, for the basic case, various people have already done this for you (hooovahh's link is a good one).
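
    To make the request-response idea concrete, here's what a client-side GET looks like outside of a browser; a minimal Python sketch, where the URL and the JSON shape are made up (a real LabVIEW web service defines its own):

    import json
    import urllib.request

    # Hypothetical endpoint exposed by a LabVIEW web service on Computer A
    url = "http://testweb-host:8080/mywebservice/status"

    with urllib.request.urlopen(url, timeout=5) as resp:
        body = resp.read().decode("utf-8")

    print(json.loads(body))  # e.g. {"state": "running", "count": 42}

    In the browser case, the JavaScript you serve to Computer B issues the equivalent request via AJAX and then updates the HTML with the result.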

    Depending on where you are on the learning scale, NI did an OK job of writing up some of this stuff here:
    https://forums.ni.com/t5/LabVIEW-Web-Development/Web-Services-Getting-Started-Series/ta-p/3498686
    https://forums.ni.com/t5/LabVIEW-Web-Development/Web-Services-Publishing-Data-Series/ta-p/3501626
    https://forums.ni.com/t5/LabVIEW-Web-Development/Web-Connectivity-Options-in-LabVIEW/ta-p/3501679

    (all part of the same community group, different hub pages)

  10. Assuming I'm understanding you correctly, you need to read and comprehend this, basically:
    http://zone.ni.com/reference/en-XX/help/371361N-01/lvconcepts/endpointurls/

    But to point you to the right place:
    "Note: Only one application on each computer can specify the default context. Therefore, if you have multiple applications on a single computer that use network streams, you must assign a URL instead of a name to each endpoint in those applications."

    So your stream name shouldn't be <randomnumber>; it should be //localhost:<random1>/<random2>.

    I don't know how fast your code starts up, but if you are going to use a shared variable I'd suggest setting your endpoint create to a fast timeout (1-2 seconds) in case two exes try to claim the endpoint at the same time.

    The context is just any string; it's not the application name. It also only matters if there is more than one exe running on a target, so you never need it on the cRIO side. On Windows, using the application name only works if it's unique, so it's simpler to pick a random number.

    Since streams are 1:1, if you want to connect multiple senders simultaneously you need either multiple hardcoded reader streams on the cRIO, or you need to define your own listen+accept scheme just like TCP. I would just use TCP, but if you like streams you would do this (for comparison, a plain-TCP sketch of the same pattern follows the list):

    1. On the cRIO, create a writer stream called "streamaccept".
    2. On the Windows machine, connect to streamaccept using reader endpoint <random1>:<random2>.
    3. On the cRIO, create <random3> and send it over the streamaccept stream to Windows, then launch a process to handle that stream (reader <random3>).
    4. On Windows, receive <random3> from streamaccept and create a new writer endpoint <random4>:<random5> which connects to <crio>/<random3>. Close the <random1>:<random2> endpoint.
    5. Connection established; send data.
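
    For comparison, this handshake is roughly what TCP's listen/accept gives you for free, which is part of why I'd just use TCP; a rough Python sketch of the cRIO (server) side, purely illustrative (the port number is made up):

    import socket

    # The listener plays the role of the hardcoded "streamaccept" stream; each
    # accepted connection is already a dedicated private channel, so no random
    # endpoint names need to be exchanged.
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind(("0.0.0.0", 6341))  # hypothetical port
    srv.listen()

    while True:
        conn, addr = srv.accept()  # one connection per sender
        data = conn.recv(4096)     # handle it here, or hand it off to a worker
        print(addr, data)
        conn.close()
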
    You don't need to use classes, but an advantage here is that you can pass around the different data sets as different objects rather than having to convert everything back into one big cluster with all the options. That having been said, I'm assuming you're basically transmitting configuration as string commands to the DAQs, so it may be that you just use an array of strings for configuration and then have a set of UIs which interpret that string configuration just like the DAQ does.

    An example of a configuration as you describe can be seen here, using classes:
    http://www.ni.com/example/51881/en/
    For each device you can right-click and select to add a current channel or a voltage channel or whatever, and when the user clicks on that item in the tree it shows the UI associated with that class. If you have a finite number of classes and they all work with the same format of configuration strings, there is nothing to say you couldn't do something similar with plain VIs: you just select the VI to add to the subpanel depending on the string that says "type=thermocouple" or whatever (a rough sketch of that kind of dispatch follows below).
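
    As a rough illustration of the plain-VI variant, the selection logic is just a lookup from the type string to the UI you load into the subpanel; a hypothetical Python sketch (all names are made up):

    # Map the "type=..." field of a configuration string to a handler
    # (in LabVIEW, the VI you'd insert into the subpanel).
    def parse_config(cfg: str) -> dict:
        return dict(pair.split("=", 1) for pair in cfg.split(";"))

    UI_FOR_TYPE = {
        "thermocouple": "Thermocouple Config UI.vi",
        "voltage": "Voltage Config UI.vi",
        "current": "Current Config UI.vi",
    }

    cfg = parse_config("type=thermocouple;channel=ai0")
    print(UI_FOR_TYPE[cfg["type"]])  # -> "Thermocouple Config UI.vi"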

  13. 8 hours ago, CraigC said:

    My initial reasoning for having the hierarchy this way is that the child object is held in a list of other child objects, or "plugin tests". The test is merely concerned with its data for running a test or taking a measurement, etc. Its parent method deals with metadata associated with that test: "limits", "requirements", and other references associated with that test instance. The top level in the hierarchy deals with other flags such as "on fail options", "abort", "repeat conditions", "test type", etc.

    Since I didn't see it mentioned elsewhere, it sounds like you might be in need of https://en.wikipedia.org/wiki/Composition_over_inheritance

    A measurement isn't a type of limit check, and a limit check isn't a type of execution behavior. It may be that you want to split this up into executionstep.lvclass, which contains an instance of thingtoexecute.lvclass (your measurement) and an instance of analysistoperform.lvclass (your limit checks), or something along those lines (sketched roughly below).
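
    A minimal sketch of that split, with Python standing in for the lvclasses and all names hypothetical:

    # Composition: an execution step *has* a measurement and an analysis,
    # rather than a test *being a kind of* limit check or execution behavior.
    class Measurement:               # stands in for thingtoexecute.lvclass
        def run(self) -> float:
            return 1.23              # acquire/measure here

    class LimitCheck:                # stands in for analysistoperform.lvclass
        def __init__(self, low: float, high: float):
            self.low, self.high = low, high

        def passes(self, value: float) -> bool:
            return self.low <= value <= self.high

    class ExecutionStep:             # stands in for executionstep.lvclass
        def __init__(self, measurement: Measurement, analysis: LimitCheck, on_fail: str = "abort"):
            self.measurement = measurement
            self.analysis = analysis
            self.on_fail = on_fail   # execution flags live here, not in the test itself

        def execute(self) -> bool:
            return self.analysis.passes(self.measurement.run())

    step = ExecutionStep(Measurement(), LimitCheck(0.0, 2.0))
    print(step.execute())            # True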

  14. 1 hour ago, Nathan_MerlinIC said:

    I found my wires were switched.  Apparently Omron calls out A- B+ and my adapter uses A+ B- and I wired it using the letters instead of the +/-.

    Using Smithd's advice I went to a simple VISA chain and was able to read the hold register. Seems like the E5CC is using 2-byte mode.  My command was 0103 2000 0001 8FCA and it responded 0103 0200 A4B9 FF, I parsed out 00A4 (164 deg C).  Even though I was able to read the register I got error -1073676294  The number of bytes transferred is equal to the requested input count. More data might be available. I'm not too worried about the error at the moment since I'm so happy just to read the register.  I tried increasing the VISA read byte count from 7 to 8, but it didn't increase the return count of 7.

    I wasn't able to get Porter's API to work even though I think I have the right comm protocol.  I get error 403482 Modbus CRC/LRC error.  The command sent is 01 03 07 D0 00 01 84 87 with a response that is split in two 01 03 00 and 20 F0.  Nothing was displayed in the Registers indicator on the GUI.

    I'm going to start working on trying to write to the controller and set the temperature.  Please let me know if you have any insight into these errors.  Thanks for all your help so far.

    -1073676294 should be a warning, not an error, and it's expected. Many serial instrument protocols are CRLF-delimited, so if you request, say, 100 bytes and get a full 100 bytes back rather than stopping at a CRLF, it might mean you didn't request a big enough read. If I were developing VISA I probably wouldn't surface it on every read call, but hey.

    The CRC error is weird; I'll let Porter answer that one if he can. Do you get a similar CRC error with the v1.1.5.39 library?

  15. 2 hours ago, Nathan_MerlinIC said:

    Another note regarding the comm specs. I re-read the manual and saw that if I'm using Modbus, the communications data length must be 8 bits, and the stop bits must be 1 (with parity set to Even/Odd) or 2 (with parity set to None). I'm running data length 8 bits and 1 stop bit with even parity. I still get the timeout error during the VISA Read... still tracking it down.

    I also figured out how to see the full write command. I can now see how the write command breaks down into 01 - device, 03 - function code, 07CF - start address (i.e. 1999), 0001 - address quantity, B541 - CRC (I still don't quite understand the CRC-16).

    It looks like you changed which register you're reading between code revisions. Earlier you were requesting 8193 (0x2000) and now you are requesting 0x07CF, which doesn't exist according to the manual.

    The timeout just means the device never responded, which likely means your device never received the message or decided to ignore it (for example, if the slave address doesn't match).

     

    At this point I'd suggest simplifying things a bit. Drop down a VISA Open, then VISA Configure Serial Port, potentially a property node, then a write and a read, then a close (similar to the example in post 1 here: https://forums.ni.com/t5/Instrument-Control-GPIB-Serial/Read-data-from-IED-using-MODBUS-RS485/td-p/1979105).

    For the write, wire up a string constant set to hex display mode containing "01 08 00 00 12 34 ED 7C".

    For the read, specify a length of 8. You should receive an exactly matching string. This is per PDF page 89 (manual page 4-15, section 4-4-4) of the manual you attached, which specifies an echo-back test.

    This way you can fiddle with the settings on the serial port until you get something back, at which point you should be safe to transfer those settings to the other two libraries and try again.
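
    Since the CRC-16 came up in the quote above: it's the standard Modbus RTU CRC (initial value 0xFFFF, reflected polynomial 0xA001), appended to the frame low byte first. A small Python sketch that reproduces the 8F CA from your earlier 01 03 20 00 00 01 request and the ED 7C in the echo-test frame:

    def modbus_crc16(frame: bytes) -> bytes:
        # Standard Modbus RTU CRC-16: init 0xFFFF, reflected polynomial 0xA001,
        # appended to the frame low byte first.
        crc = 0xFFFF
        for byte in frame:
            crc ^= byte
            for _ in range(8):
                if crc & 1:
                    crc = (crc >> 1) ^ 0xA001
                else:
                    crc >>= 1
        return bytes([crc & 0xFF, crc >> 8])

    print(modbus_crc16(bytes.fromhex("010320000001")).hex())  # -> "8fca"
    print(modbus_crc16(bytes.fromhex("010800001234")).hex())  # -> "ed7c"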

    That's more like it. Now you can see where it writes the command (28) and then reads back up to 513 bytes in response (the max size of a Modbus ASCII frame).

    The thing I missed before is that you selected ASCII, when your device seems to use Modbus RTU. If you run that code again with ASCII changed to RTU, you should see the write (28) change to a binary string as well. The read (29) will also change size.

  17. On 4/12/2017 at 10:22 AM, Nathan_MerlinIC said:

    I'm using what I believe is an updated free NI API module https://forums.ni.com/t5/NI-Labs-Toolkits/LabVIEW-Modbus-API/ta-p/3524019 (the link starts downloading the module: http://forums.ni.com/ni/attachments/ni/7324/526/6/ni_lib_modbus_library-1.1.5.39.vip). However, I tried to get help from the NI forums and got an NI response that they don't support this module... thanks. They only support the modules that cost money.

    Yes, that library is mostly unsupported now.

    That's part of the reason I tend to recommend Porter's implementation (http://sine.ni.com/nips/cds/view/p/lang/en/nid/214230) for master-only applications. He modeled the API after that one (1.1.5.39), but it's totally open source (no locked diagrams) and he made some improvements to the serial handling.

  18. 19 hours ago, rolfk said:

    That's the status return value of the viRead() function and is meant as a warning "The number of bytes transferred is equal to the requested input count. More data might be available.".

    And as you can see, viRead() is called for the session COM12 with a request for 0 bytes, so something is not quite set up right, since a read of 0 bytes is pretty much a "no operation".

    Note: by the time I got to the end I had spotted the issue, but I'm leaving my thoughts here in case they help.

    I can't recall exactly, but I believe that is still part of the init function (basically it's a flush on init: get Bytes at Port, which is 0, and read that many bytes; note the Flush I/O Buffer right afterwards, which is the end of init). That is, that part of the trace makes sense (steps 1-12).

    During the read, the RTU library must poll the port, which is why you see it getting attributes over and over again (in this case, Bytes at Port). Then Close gets called, but the library continues to poll Bytes at Port, which makes no sense to me. Just based on the trace I would assume there's a race condition, but in the code there doesn't appear to be one. Finally, there is no *write*, which is important given that Modbus is a request-response protocol.

    The issue: it looks like in your sample code above you instantiate a new serial *slave* rather than a master. Since you want to read values from another device, you want to create a serial master. That is also why the trace is so bizarre: there *is* a race condition. Init (1-12) is called inline and launches a background thread to poll the port (13-16); then your VI reads from the slave memory (a local access of a data value reference) and immediately closes (step 17). I'm guessing the background thread then reopens the port (step 18) and continues polling (19-26).

    The solution should be to change over to a serial master instance.

    It looks like they deliberately made their terminology confusing. I can't imagine how they managed it unless they were actively trying ;)

    First, the obligatory intro-to-Modbus document if you aren't familiar: http://www.ni.com/white-paper/7675/en/

    Skipping ahead to section 5 in your PDF, you can see what goes in those fields. It looks like the easiest thing to check would be the 'status' field, so you'd enter either:
    0x0002 for the starting address and 4 for bytes to read, or
    0x2001 for the starting address and 2 for bytes to read.

    If you do this and get a VISA error or error 56, it means you failed to communicate with the device. Depending on the library you are using, there is an error range associated with Modbus errors, meaning communication was successful but the device rejected your request. The object-based libraries return the Modbus exception code as a LabVIEW error code with an offset (e.g. error 100001 = Modbus error 1), and the trick is figuring out what that offset is (it varies by library). Page 4-9 has the error codes possible for your device, which in this case are just the two standard Modbus exceptions, 0x02 and 0x03.

    If you get error 56 or similar, be sure to check page 1-2 for the right serial settings to use. It looks like your device might have an issue similar to what I described here with regard to the number of bits.

    The default configuration of your device does not appear to match the Modbus protocol; instead it seems to match their custom protocol. To their credit, however, they note this in the asterisk below the table in section 1-1-6.

    Finally, if the NI one fails you, and you just need master functionality, try this one: http://sine.ni.com/nips/cds/view/p/lang/en/nid/214230

    PNG conversion takes a long, long time, so you should use the IMAQ image-to-string functions to check whether the PNG conversion is what's slow with your particular images. I found that the quality setting didn't help performance much, but you can try it (PNGs here are lossless, so it's the quality of compression, not of the image).

    If your drive is a spinning disk you must not write each image to its own file; you must pack the images into a single file and unpack later. The overhead of creating files on a spinning-rust drive is a killer; it probably cut my throughput by a factor of 8. I made an in-memory zip creator for this purpose but unfortunately I cannot share it. You can do much the same by creating PNG strings and writing them to a single binary file (a rough sketch follows). For an SSD this isn't an issue.
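
    A rough sketch of the 'PNG strings into one binary file' idea, with Python standing in for the LabVIEW file I/O and placeholder bytes standing in for the IMAQ image-to-string output; length-prefixing each blob makes unpacking trivial later:

    import struct

    # png_blobs stands in for the PNG strings produced by the image-to-string
    # conversion; here they are just placeholder bytes.
    png_blobs = [b"\x89PNG...frame0...", b"\x89PNG...frame1..."]

    # Write everything into one file for the whole acquisition.
    with open("run_0001.bin", "wb") as f:
        for blob in png_blobs:
            f.write(struct.pack("<I", len(blob)))  # 4-byte length prefix
            f.write(blob)

    # Unpacking later is the mirror image.
    frames = []
    with open("run_0001.bin", "rb") as f:
        while True:
            header = f.read(4)
            if len(header) < 4:
                break
            (n,) = struct.unpack("<I", header)
            frames.append(f.read(n))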

    The worker-thread approach is the same as producer/consumer, but with N consumers (one per CPU here) for each producer. You can do this with 'Start Asynchronous Call', a parallel for loop, or just dropping down several copies of the same function.

    With RTU in particular I've found that there are a lot of devices that don't follow the spec with regard to parity, stop bits, etc. Since it's RTU, I'd verify the settings from your device's spec sheet and check them against the Modbus library's settings.

    Section 2.5.1 here http://www.modbus.org/docs/Modbus_over_serial_line_V1_02.pdf 

    Bits per Byte: 1 start bit, 8 data bits, 1 bit for parity completion, 1 stop bit
    Even parity is required, other modes ( odd parity, no parity ) may also be used. In order to ensure a maximum compatibility with other products, it is recommended to support also No parity mode. The default parity mode must be even parity.
    Remark : the use of no parity requires 2 stop bits.

    There are a decent number of devices out there which use no parity and 1 stop bit, making them not-quite-Modbus devices. To work around this with the Modbus library, you must use a VISA property node to re-set the stop bits after initialization and before talking to your device (see the sketch below for the settings that have to line up).
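
    In LabVIEW the workaround is that VISA property node; purely for reference, here are the same knobs in a hypothetical Python/pyserial sketch (the port name, baud rate, and the no-parity/1-stop-bit combination are all assumptions about such a device), so you can see exactly which settings have to agree:

    import serial  # pyserial

    # A device that uses no parity and 1 stop bit, even though strict Modbus RTU
    # would call for 2 stop bits when parity is disabled.
    port = serial.Serial(
        port="COM4",             # or something like "/dev/ttyUSB0" on Linux
        baudrate=9600,
        bytesize=serial.EIGHTBITS,
        parity=serial.PARITY_NONE,
        stopbits=serial.STOPBITS_ONE,
        timeout=1.0,
    )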

    There are some examples for IMAQ with worker threads; I'd take a look at those.

    I don't think producer/consumer will particularly help, since IMAQ basically gives you that for free if you enable buffering in the driver. What could help is having N workers, which would let you use all CPU cores for processing and could allow you to write your images as PNGs (reducing image size) if your disk is the bottleneck.

    I would suggest using a disk benchmark like CrystalDiskMark (http://crystalmark.info/software/CrystalDiskMark/index-e.html) to see how fast you can write to disk, then determine how big your images are and their bit depth; that will tell you whether your disk is the bottleneck (or whether it will be, even if it isn't right now).

    Also, you mentioned buffering in RAM. If this is a finite acquisition then I'd definitely look into that: maintain a set of N IMAQ references and use a new reference for each incoming frame. At 100 fps, however, that's a lot of memory.
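
    To put rough numbers on it (the frame size here is hypothetical): a 3 MB frame at 100 fps is about 300 MB/s sustained, which is more than a typical single spinning disk can keep up with, and roughly 18 GB per minute if you try to buffer it all in RAM.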

    Well, this is odd... I don't think I saw this as unread, but I just looked at notifications and here it was. Hrm...

    Well, long story short, it sounds like splitting the program in two is a good fit. To quickly respond, though...

    On 3/9/2017 at 3:00 AM, ShaunR said:

    Interesting. Why didn't you use Websockets, RTSP or WebRTP?

    The advantage is that it can be displayed (in theory) directly in any browser, since it's just using HTTP features. I do have a lot of WebSocket code built out as well, so in the existing application that's how I'm transferring images.

    On 3/9/2017 at 3:00 AM, ShaunR said:

    Well. B & C are the same thing essentially from a display point of view. I have achieved similar things to A in the past with saving to memory mapped files at high data rates which can be exploited by other VIs or even other programs. But your problem seems to be rendering, not acquisition or exploitation. What I'm not understanding at present is if an image needs operator intervention then presumably they can only operate on one image at a time and 30 line profiles or histograms aren't that intensive (why did NI drop array of charts?).

    So how big are these image files?

    For display, yes, it's the same (I'm just using the ROI and overlay features of IMAQ). What's more difficult is all of the low-level UI stuff related to, for example, drawing a box around a feature and having it show up. It's not hard so much as bug-prone and time-consuming to develop from scratch.

    What I was trying to say is that the way those features are used is totally different. The histogram, drawings, etc. are all part of the offline mode of operation, so I can easily use something with slow rendering (like LabVIEW), so long as I can make a faster-rendering application that handles the simple case (A+B, with no user interaction except resizing windows). The images themselves are 3 MB raw or smaller; there's just a ton of them. This is another example of how the use cases are different. Without going into detail, the best way to convey the difference is to imagine a large piece of machinery: when the system is being tweaked, people only want to see their small part, but when the system is operational the entire system must be monitored simultaneously, though with a lower level of detail. That's why I think the split-program approach will work: something with GPU rendering for the high-throughput mode, and LabVIEW as a development shortcut for the low-throughput mode.
