
Everything posted by smithd

  1. 2013 is my go-to for stable+old. 2010+11 had the new compiler, 12...I dunno what happened with 12, but it sure seemed to be unstable. It's probably a lie that my brain made up, but it always seems like the even years are much worse than the odd years, with the exception being that 2018 seems pretty solid. 2013 has the new compiler (2010) with the performance issues resolved (2012), auto-concatenating terminals (2012), a somewhat improved web service scheme, get class instance by name, and the .NET CLR got bumped to 4.0 (I know this may be a negative to you). The only big issue I know of is the attached code comment arrow thing, which results in an instacrash at a later date if you duplicate any multi-frame structure that has one inside. You miss out on custom right-click menus (2015), VIMs and read-only DVRs (2017), and the CLI, python node, and type disable structure (2018) -- of these only the custom right-click menu is probably helpful.
  2. smithd

    Playback file on cRIO

    Not sure if this was a typo, but RT FIFOs are between loops on the RT side. You would use a host-to-target FPGA FIFO to pass data down to the FPGA for output. These FIFO types do not support clusters, so you'd have to develop your own scheme for passing data down (sketched below). This may be as simple as sending values in chunks of N, but you do have to watch out for overflows. If you are sending data from RT to FPGA there is a likelihood of underflow, depending on the output rate. You would probably wish to load all your file data into the FIFO first and then tell the FPGA to start reading. This all depends of course on how much data you have. If you have 10 minutes of data at a rate of 1 Hz, this is overkill but would still work. If you have 10 minutes of data at 100 kHz, then you likely won't have enough memory, so you'll want to preload the FIFO and then feed data as fast as you can. You'll also want to keep in mind the signals side of this. I'm not great in this area, but I would imagine that if you are attempting to replicate a continuous signal, you'll want to output as fast as possible to reduce frequency spikes as the output changes.
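    A minimal sketch of the "chunks of N" idea -- plain Python for illustration only, not LabVIEW code; the FPGA side would unflatten each group of N elements back into its cluster fields:

        # flatten each 'cluster' (here, three output channels per sample) into
        # a fixed group of N scalars before writing to the host-to-target FIFO
        samples = [(1.0, 2.0, 3.0), (4.0, 5.0, 6.0)]
        N = 3
        flat = [value for sample in samples for value in sample]
        # the FPGA reads N elements at a time and reassembles one sample per group
        assert len(flat) == len(samples) * N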
  3. smithd

    Rebar/Rust and G Community

    I'm definitely not expecting anything he said to be set in stone, I was just curious about the concept, and to reinforce one of the bigger use cases that comes to mind for me. That sounds like a pretty cool concept. I don't really have any more specific thoughts, it just sounds like a nice way forward. I do have one more somewhat related...I guess question, that comes to mind. So I think for me the exciting part is all of the reference lifetime-related stuff, but your first bullet point is improved memory management for large data sets. Most large data sets are such because they are for analysis, and most of the analysis functions (both for images as well as for waveforms) are C DLLs. My question, if you want to call it that, is: does that confound things at all? For example I know some types passed to the CLFN require full data copies anyway (i.e. the substring, per that buffer allocations thread I linked to earlier).
  4. smithd

    Rebar/Rust and G Community

    Fair enough on all the above -- I think the answer is that we are agreeing for the most part, I'm just being pushy 🧐 . To follow up on the above and on AQ's point, my last comment was more or less attempting to reach this question, but didn't quite get there: how does this work together with the existing system? I think most of my questions and comments above were basically revolving around how these concepts apply within the world of a normal VI. To try to illustrate what I'm getting at, I'll go back to my vision application above. If I understand you, it seems like the way it is now, rebar references may exist only within rebar diagrams. But if I have my 5 vision-related loops described above, my logger will definitely be using standard file refnums, acquisition will be using IMAQ refnums, networking will have TCP refnums, so presumably all these top-level loops will be standard, non-rebar VIs. But I want to share my data between them safely and without copies. So I'm questioning what the approach is for melding the two types of code. Is the idea to make more of the refnum types into rebar types, so there is a TCP rebar and an IMAQ rebar and a file rebar (this is what I was trying to get at with the "baggage-free VI" comment)? Or would the reverse happen? Or am I totally misunderstanding?
  5. smithd

    Rebar/Rust and G Community

    Most of your responses made sense so I'm just going to pick out the few bits that made less: The local variable lifetime is kind of my point -- its life isn't really limited, because the data space for a reentrant VI remains live for the duration of an application. Similarly, I was under the impression the shift register value is also permanent. As an example, I seem to recall a while back running a loop that allocates a giant (>>1 MB) array on a shift register but never wiring it out of the VI. If you do that, the large array remains part of the VI's data usage. From the perspective of the details you mentioned, yes, that giant array is on the heap somewhere, but all that data is still live until something overwrites it, making that data location permanent as well, even if initialized. Good explanation, but for the refnum point I keep coming back around to -- yes, this is something I absolutely would love to see improved and resolved, but it seems like a bandaid over an underlying flaw in how LabVIEW does dataflow. Now that's something I can definitely get behind. If NXG can have "gvi"s or "good, baggage-free VIs without depending on the design decisions from 3 decades ago that might not have aged as well" I might switch sooner. But in all seriousness, it seems like you've made your own type of VI with compiler rules associated with it. Is there a way to expand that more broadly, making life simple enough for the compiler (per your points above) that it can figure out the lifetime of objects? I.e. in order to reach the goal of stateless functions, but without relying on the user to manage lifetimes?
  6. smithd

    Rebar/Rust and G Community

    Definitely kudoed, but Mercer's responses sound like this is never gonna happen. The abort button thing in particular seems to also apply to rebar, although it's not clear if the NXG abort has the same semantics. I like your point on references, but the counter is also good to bring up -- since everything is top level, reliable sharing is impossible between two things with different lifetimes unless one is always long-lived and defined as the owner. This is often fine, but it can bite people who are unaware.
  7. smithd

    Rebar/Rust and G Community

    It seems interesting, and very cool that NXG lets you do stuff like this, but I do have some questions and comments. All of the below is without having installed it. I tried, but it requires NI Package Manager 19. NIPM 19 is not released as far as I can tell (I downloaded the latest, 18.5.1, from ni.com and when it loads it doesn't seem able to reach the server). More to the point, why does NIPM continuously have this issue where it requires a specific version to unpack the files? I honestly don't think I've ever successfully installed a package with NIPM on the first go <_<

    To start, I think you need to get more technical in the overview. For example, you say things like local variables and shift registers are "places" and that "places" have lifetimes...what is the lifetime of a local variable or a shift register? The explanation of how in rebar a set of wires corresponds to a single 'place'...sounds just like a standard LabVIEW buffer allocation. My point is that you're making something for the type of person who might say "oh neat a rust+labview mashup!!" but then have a 1000 ft explanation of how it works 👻

    This is just a nitpick, but LabVIEW already makes data copies explicit. What it doesn't make explicit is the optimizations that remove the need for copies. Also, they aren't copy dots, they are buffer allocations...

    It's obvious that LabVIEW knows what to do with a VISA refnum (and file refnums, and any other refnum I can think of) when the application closes...which I think is a positive for LabVIEW, so I'm not sure how often it's actually valuable to define cleanup (especially given it sounds like the cleanup is off-diagram). That having been said, I can think of one super exciting use for this, which is hardware access via a DLL. If this tool lets you say "this pointer-sized int is a pointer and when the program exits call this function" that's pretty awesome. And much better than the unreserve callback nonsense in the CLFN.

    I'm kind of confused by this, and it probably comes from my comments at the top about not really getting what's going on under the hood. To me the problem with the existing system is that the refnums point at global things. For example, if I allocate a queue inside of A.vi, and I have no terminal which outputs the value of the queue refnum, then when A.vi finishes executing the compiler SHOULD kill the queue.....except that queues are global. I'd much much much much rather have the compiler figure out the lifetime of the queue for me and kill it when the wire can't go anywhere else. It seems clear that this addon allows for that, which is awesome, but why do we need an addon? I thought the whole point of dataflow was that the compiler knew about the lifetime of data and could make intelligent decisions based on that?

    I don't know enough about iterators and generators, but why do you think anonymous functions are aided by this? To be honest it's not clear to me why anonymous functions aren't possible right now, nor closures.

    I think a good summary of my comments above is basically this: hey, that's pretty neat -- why isn't this already part of the language?

    I wanted to finish up with an application-specific question. One of the big use cases for avoiding copies is image processing, but unfortunately IMAQ images are not only global, they are also not reference counted like named queues (i.e. the worst of both worlds). So if, for example, you are an image acquisition loop and you want to share your latest snap with all your closest friends (the logging loop, the network loop, the processing loop, the display loop), it's difficult to share the references in such a way that when all the other loops are done, the image is freed. If it were a queue, you could obtain several named references to the same queue and the queue only goes away when all the references are dead. I ended up implementing this myself with a DVR, but it still requires that users call the 'release' function -- ideally, and what I think rebar brings to the table, is an automatic release. Does this rebar system solve that? If so, sweet. You should share it with the IMAQ group
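    Just to spell out the behavior I hand-rolled with the DVR -- this is a Python illustration of "release when the last holder is done", not anything rebar- or IMAQ-specific, and the names here are made up:

        import weakref

        def dispose(name):                        # stand-in for an IMAQ-style dispose call
            print(f"disposing {name}")

        class SharedImage:
            """Disposes the underlying image once the last reference dies."""
            def __init__(self, image_name):
                self.image_name = image_name
                # fires dispose(image_name) when this wrapper is collected, i.e.
                # once acquisition, logging, network, and display have all dropped
                # their references -- no explicit 'release' call needed
                self._finalizer = weakref.finalize(self, dispose, image_name)

        img = SharedImage("snap_0001")
        holders = [img, img, img]                 # the other loops each hold a reference
        del img
        holders.clear()                           # last reference gone -> "disposing snap_0001"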
  8. smithd

    [LVTN] Messenger Library

    I'm guessing shaun would say this one: https://lvs-tools.co.uk/software/websocket-api-labview-library/ But the one on VIPM seems good enough to me. A lot of the others only implement one half or the other.
  9. smithd

    [LVTN] Messenger Library

    Yeah, websockets are easy; there are a dozen libraries out there for setting them up using the TCP primitives. The 50 ms delay could be Nagle -- if it's a closed network, 50 ms is at least 25 round trips, likely more. One possibility for the long, long delay is if you're using a hostname rather than a numeric IP address. The hostname lookup is a disaster for any networking code, at least on Windows.
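    For reference, the socket option behind Nagle is TCP_NODELAY; a minimal Python illustration (not LabVIEW -- host and port here are placeholders):

        import socket

        sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)  # disable Nagle
        sock.connect(("192.168.1.10", 5000))  # numeric IP avoids the hostname lookup stall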
  10. smithd

    [LVTN] Messenger Library

    When you say cDAQ, do you mean a normal cDAQ or one of the Windows/RT cDAQs? Assuming Windows/RT: if you just want some basic messages, maybe set up a web service with a named queue to forward messages to. Then you don't need an actor, just call the HTTP API directly (or spawn it as a task).
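    Something along these lines on the client side -- the endpoint URL and message format are made up for illustration, you'd match whatever your web service VI actually expects:

        import urllib.request

        msg = b'{"topic": "status", "data": "pump started"}'
        req = urllib.request.Request(
            "http://mydevice:8080/messages/enqueue",   # hypothetical web service endpoint
            data=msg,
            headers={"Content-Type": "application/json"},
        )
        with urllib.request.urlopen(req, timeout=5) as resp:
            print(resp.status)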
  11. smithd

    [LVTN] Messenger Library

    DAQmx might not work right unless you increase the thread count, depending on how many of those actors have blocking functions.
  12. It's worth mentioning that LabVIEW should do this for you -- it's called loop-invariant code motion -- so don't worry about it for performance reasons. You should put the code wherever it's more readable. In this case, I'd argue the most readable form is leaving the math where it is, but moving the source values (array size, iteration terminal, control) closer in. I'd also say use a local variable for the control -- the downside to this is that, in fact, using a local variable will stop the loop-invariant code motion from happening 😕

    In this particular case, I'd like to mention that you can make code more readable through various means -- one is code movement, another is labelling wires or using self-labelled elements like clusters. You can also add comments, but before you go adding comments I would always, always look at the code and see if it can be changed to use a different tool to make the meaning more self-evident.

    To stick with the current section of code: personally, I find pretty much all iteration-based math to be incomprehensible. I sat here for like a full minute trying to figure out why on earth you were taking max(0, i-10)...but stepping back for a second, it looks like it should probably just be a case structure with two cases, "..10" and "11..". There is a case structure in this code either way, but in your version it's hidden inside the max() function; if you use a case structure it's explicit (see the little sketch at the end of this post). Then you have the problem of "why 10?", but at least anyone reading the code is not stuck back at "why maximum?".

    Along these lines, one thing that might really help you is if you find yourself a buddy and try to explain the code to them (bit by bit, I mean, definitely not all at once). As logman said, the application isn't really all that complicated, and I would bet that anyone with a semi-softwarish or logical background could follow along if you walked them through it. The purpose would be to rethink code like the above. If you're sitting there explaining to another human being "well I want the index of this listbox to be 0 up until iteration 10 and then I want it to start incrementing by 1, so what I did is I took the maximum of 0 and the iteration terminal and then built that into an array with 0 and used that as the index", you might start thinking to yourself "man, I really need to make this easier to follow".
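    The original is LabVIEW, so this Python sketch is only meant to illustrate the readability point -- the branch exists either way, max() just hides it:

        i = 17  # the loop iteration terminal

        # what the max() version computes
        index_implicit = max(0, i - 10)

        # the same thing with the branch spelled out, like a two-case structure ("..10" / "11..")
        if i <= 10:
            index_explicit = 0           # stay on the first row for the first 10 iterations
        else:
            index_explicit = i - 10      # then start walking down the listbox

        assert index_implicit == index_explicit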
  13. Ah yeah, I read a note that said a specific Visual Studio version was no longer supported; should have scrolled down further.
  14. Yeah, I looked again, I don't think MATLAB can use Visual Studio. Nah, this is a parsing problem. Maybe I'm misreading the error, but it looks like it doesn't know what __cdecl is -- __cdecl is Visual Studio-specific. Since I think most compilers use that calling convention by default, you can safely remove all instances of __cdecl from stim.h.
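    If you don't want to do that by hand, something like this would do it (the file names are assumptions, adjust to wherever stim.h lives):

        import re

        with open("stim.h") as f:
            header = f.read()

        # drop the Visual-Studio-specific __cdecl tokens; most other compilers
        # use the cdecl convention by default anyway
        with open("stim_fixed.h", "w") as f:
            f.write(re.sub(r"\b__cdecl\b", "", header))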
  15. I have two answers which might help:

    Answer A: Purchase TestStand and learn it. Without specific details I can't be sure, but it looks like the sort of application TestStand is built for. If someone gives you crap about the cost, I'd argue that for sequential things like you've shown here TestStand is a lot more maintainable (the bus factor for the code above is likely 1, and potentially 0 after a few months away from it). I'd also add that it has a lot of standardized reporting stuff built in...and if you're calibrating stuff, this would seem to be critical. So seriously, at least let someone from NI spend an hour giving you a demo.

    Answer B: Looks to me like your first step might be to take some of those 50 front panel controls and put them into subVIs that are set to run as dialogs (in VI options, show front panel when called and close front panel when done). Those dialogs can take some of the logic that's in your main loop and organize it a bit -- "this event case is associated with this user input", and so on. Those dialogs can return small clusters with the configuration (Monitor is a cluster of model[str] and serial[str] and resistance[u16]). If at the end of this you still have a ton of wires, as hooovahh says you can turn that into a clustersaurus/BFC, but it's still better organized (clustersaurus->monitor->model vs just having a control called "monitor model" sitting out there).

    Once you have some of what I'm going to call the "non-business logic" (e.g. the UI logic) out of the way, I think a state machine is a reasonable migration point to start with. I would add a caveat to this, which is you should also learn about different communication tools within LabVIEW -- in this specific case, queues and user events (a type of queue which works with the event structure), or "channel wires" (which are intended as a simpler wrapper around the queue/event concept). I say this because it looks like there are several long-running tasks without user interaction, so creating parallel loops to run different tasks seems like the next logical step. In general you would use a queue to send data (e.g. "start running standard cal") from the UI loop to the tasks loop, and use a user event to send information (e.g. "I'm done" or "an error occurred") from the task loop back to the UI loop -- there's a rough sketch of that split at the end of this post. drjdp has a video on some of the considerations for this here, although it may be too much for you now -- he's coming at it from the other end: "I've been using this pattern for a while and here's where bad stuff happens".

    Once you've mastered this version, and if you feel like it's still complex, the next step would be to dive further into frameworks (things like Delacor's QMH or drjdp's Messenger Library or NI's Actor Framework, which are arguably in order of increasing abstract-ness) -- in these frameworks more stuff happens asynchronously, which can make the code more modular (the "standard cal process" is always over in this one library while the user dialog input is always over in this other library), but there's obviously a big learning curve and frameworks tend to require you fit into them rather than the reverse.
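    A text-language analogy of the queue / user-event split (Python threads and Queues standing in for the UI loop and task loop; purely illustrative, not a LabVIEW API):

        import queue, threading, time

        commands = queue.Queue()   # UI loop -> task loop ("start running standard cal")
        events = queue.Queue()     # task loop -> UI loop ("I'm done" / "an error occurred")

        def task_loop():
            while True:
                cmd = commands.get()
                if cmd == "quit":
                    break
                time.sleep(0.5)            # pretend to run the calibration
                events.put(f"{cmd}: done")

        t = threading.Thread(target=task_loop)
        t.start()
        commands.put("standard cal")
        print(events.get())                # UI loop reacts to the completion event
        commands.put("quit")
        t.join()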
  16. smithd

    clfn scripting

    Just curious if anyone has ever made an attempt to improve scripting out of DLLs. I know there's the built-in import wizard, but I've literally never seen that work (has anyone here?). The reason I'm asking is that I was thinking about it and came across this tool: https://github.com/CastXML/CastXML (binaries from the build link at the bottom here: https://github.com/CastXML/CastXMLSuperbuild). Essentially it takes a C header (or any C file, I guess) and processes it with LLVM to produce an XML tree of all the functions and types. For example, it takes the header here: https://github.com/nanomsg/nng/blob/master/include/nng/nng.h and produces XML which includes stuff like this:

        <Typedef id="_8" name="ptrdiff_t" type="_285" context="_1" location="f1:51" file="f1" line="51"/>
        <Typedef id="_9" name="wchar_t" type="_286" context="_1" location="f1:90" file="f1" line="90"/>
        <Typedef id="_14" name="intptr_t" type="_285" context="_1" location="f4:182" file="f4" line="182"/>
        <Union id="_78" name="nng_sockaddr" context="_1" location="f6:151" file="f6" line="151" members="_328 _329 _330 _331 _332 _333" size="1088" align="64"/>
        <Struct id="_61" name="nng_aio" context="_1" location="f6:96" file="f6" line="96" incomplete="1"/>
        <Struct id="_68" name="nng_sockaddr_in6" context="_1" location="f6:124" file="f6" line="124" members="_315 _316 _317" size="160" align="16"/>
        <Field id="_315" name="sa_family" type="_25" context="_68" access="public" location="f6:125" file="f6" line="125" offset="0"/>
        <Field id="_316" name="sa_port" type="_25" context="_68" access="public" location="f6:126" file="f6" line="126" offset="16"/>
        <Field id="_317" name="sa_addr" type="_385" context="_68" access="public" location="f6:127" file="f6" line="127" offset="32"/>
        <Typedef id="_79" name="nng_sockaddr" type="_334" context="_1" location="f6:158" file="f6" line="158"/>
        <Enumeration id="_80" name="nng_sockaddr_family" context="_1" location="f6:160" file="f6" line="160" size="32" align="32">
          <EnumValue name="NNG_AF_UNSPEC" init="0"/>
          <EnumValue name="NNG_AF_INPROC" init="1"/>
          <EnumValue name="NNG_AF_IPC" init="2"/>
          <EnumValue name="NNG_AF_INET" init="3"/>
          <EnumValue name="NNG_AF_INET6" init="4"/>
          <EnumValue name="NNG_AF_ZT" init="5"/>
        </Enumeration>
        <Function id="_84" name="nng_close" returns="_293" context="_1" location="f6:194" file="f6" line="194" mangled="?nng_close@@9" attributes="dllimport">
          <Argument type="_55" location="f6:194" file="f6" line="194"/>
        </Function>
        <Function id="_87" name="nng_setopt" returns="_293" context="_1" location="f6:205" file="f6" line="205" mangled="?nng_setopt@@9" attributes="dllimport">
          <Argument type="_55" location="f6:205" file="f6" line="205"/>
          <Argument type="_338" location="f6:205" file="f6" line="205"/>
          <Argument type="_339" location="f6:205" file="f6" line="205"/>
          <Argument type="_5" location="f6:205" file="f6" line="205"/>
        </Function>
        <FundamentalType id="_293" name="int" size="32" align="32"/>
        <FundamentalType id="_294" name="unsigned char" size="8" align="8"/>
        <FundamentalType id="_295" name="unsigned int" size="32" align="32"/>
        <PointerType id="_353" type="_352" size="64" align="64"/>
        <PointerType id="_354" type="_62" size="64" align="64"/>
        <PointerType id="_355" type="_47" size="64" align="64"/>

    It's pretty easy to parse because everything has an ID -- I've got some truly gross code for converting all the struct and enum definitions into LabVIEW, which is simple enough. I guess I'm just testing the waters to see if anyone else thinks this is useful.
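    For a sense of how easy the ID-based lookup is, here's roughly what pulling function signatures out of that XML looks like. This is a Python sketch, and the castxml invocation and file name are assumptions (something like castxml --castxml-output=1 nng.h -o nng.xml); the element and attribute names come from the snippet above:

        import xml.etree.ElementTree as ET

        root = ET.parse("nng.xml").getroot()
        by_id = {el.get("id"): el for el in root.iter() if el.get("id")}

        def type_name(tid):
            """Follow id references until something with a readable name turns up."""
            el = by_id.get(tid)
            if el is None:
                return tid
            if el.tag == "PointerType":
                return type_name(el.get("type")) + "*"
            return el.get("name") or type_name(el.get("type"))

        for fn in root.iter("Function"):
            args = ", ".join(type_name(a.get("type")) for a in fn.findall("Argument"))
            print(f'{type_name(fn.get("returns"))} {fn.get("name")}({args})')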
    My usual strategy right now is to minimize the number of C calls I make, but if I could just magically import a header and get the library made for me, it would be pretty cool. On the other hand, there are so many challenges associated with that which I hadn't thought about going in, like handling structs by value, or what to do with numeric return values (is it an error code? is 0 good or bad?). The particular library I selected as a test case above has two features that are super annoying -- its main 'session' reference is not a pointer like in most C APIs, but a struct containing a single int, and it also has a union type...for some reason. So even making this library work, which I thought would be an easy way to try it out, has ballooned in complexity a bit. Thoughts?
  17. smithd

    clfn scripting

    Fair enough, it's definitely been a while. Last I remember, I had to have the whole Visual C SDK on my machine because it wanted to find some standard header. To me this is one of the bigger annoyances -- I don't want to fix up the .h file, that seems iffy to me. I suppose it generally doesn't matter since it's just used to help the script along, but... Usually strings and arrays are OK, but yeah, big structs can be iffy due to offsets -- that is one nice thing about this LLVM-based tool: it tells you all the byte offsets, so technically one could pass in an appropriately sized buffer and restructure the data...but that's a pain as well.
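    To illustrate the "pass a raw buffer and restructure it" idea, using the nng_sockaddr_in6 layout from the CastXML output above (sa_family at bit offset 0, sa_port at 16, sa_addr at 32, 160 bits total). Python sketch only -- in LabVIEW the buffer would come out of the CLFN as a U8 array, and the byte order here is just an assumption:

        import struct

        def parse_sockaddr_in6(buf: bytes):
            # two u16 fields followed by a 16-byte address, 20 bytes total
            sa_family, sa_port = struct.unpack_from("<HH", buf, 0)
            sa_addr = buf[4:20]
            return {"sa_family": sa_family, "sa_port": sa_port, "sa_addr": sa_addr}

        print(parse_sockaddr_in6(bytes(20)))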
  18. My understanding from what came up is that MATLAB is attempting to build a wrapper DLL with some more MATLAB-friendly interface. A quick google didn't find me anything about how it does this, but that seems to be what it's doing. I say that because the "we don't know the..." errors come from the LabVIEW headers. So it seems like MATLAB is running a compiler against stim.h, which includes the platdefines header, and since I'm assuming the MATLAB compiler is not one of the ones NI was expecting when they built this (or if MATLAB simply doesn't #define the same definitions NI is expecting), the compiler correctly spits out the errors shown, as in this snippet of platdefines:

        #ifdef _M_PPC
        #define ProcessorType kPPC
        #elif defined(_M_IX86)
        #define ProcessorType kX86
        #elif defined(_M_X64)
        #define ProcessorType kX64
        #elif defined(_M_ALPHA)
        #define ProcessorType kDECAlpha
        #elif Compiler == kBorlandC
        #define ProcessorType kX86
        #elif defined(_ARM_)
        #define ProcessorType kARM
        #else
        #error "We don't know the ProcessorType architecture"
        #endif

    So the question becomes what these should be. My guess is that they should be defined per the LabVIEW exe/dll, which is why I suggested the MSVC compiler definitions (as I understand it, this is what NI uses for Windows builds). However, it may be that the right answer is to edit the headers to set something more appropriate. For example, in the section above, you could comment every line out except for "#define ProcessorType kX64". Similarly, you could comment out the compiler section and replace it with "#define Compiler kGCC" (if that is what MATLAB uses, which I assume it is). The only reason these need to be defined is so that extcode.h picks up all the appropriate definitions and headers for the platform. For example, stim.h uses the type "uint32_t". This is defined in stdint.h, but if your compiler isn't including it already then stim.h can't be compiled. So, in fundtypes.h, it has a bunch of platform checks to see if it needs to define those types. The easier route, vs trying to make extcode.h have the proper types, would be to just edit stim.h directly to include the type definitions you need. I would suggest editing your stim.h to look like this (lines 1-7):

        //#include "extcode.h"
        //this dll only uses two uncertainly defined types, but they are defined by stdint.h. This section is borrowed from fundtypes.h
        #if !defined(_STDINT_H_) && !defined(_STDINT_H) && !defined(_STDINT)
        #include <stdint.h>
        //alternatively comment the line above and uncomment these lines:
        //typedef int int32_t;
        //typedef unsigned int uint32_t;
        #endif
        #ifdef __cplusplus
        extern "C" {
        #endif
        .....
  19. If I had to guess, you'll want to edit STIM.h to include (at the very top, before anything else):

        #define _Win64 1 //if 64 bit
        #define _Win32 1 //any windows
        #define _M_AMD64 100
        #define _M_X64 100 // same as above
        #define __x86_64__ // and again, this might be the gcc version or something?
        #define _MSC_VER 1900 // just a guess, this is ms visual c 2015

    The full list of Visual C++ #defines is at the link below, and you can also just go through the NI header file and look for stuff that makes sense. https://docs.microsoft.com/en-us/cpp/preprocessor/predefined-macros?view=vs-2017
  20. smithd

    LabVIEW 2017 Editing Issues

    Also, this is kind of a side point, but the concept is the same. Has anyone noticed that reentrant VIs get super slow sometimes when debugging? It's not always, but I can't figure out what conditions might be causing it. It's like you're going along debugging some code, you step into a reentrant VI, and everything just stops, and it takes like 20-30 seconds for the window to materialize. I know it's not just one computer, sadly.
  21. smithd

    LabVIEW 2017 Editing Issues

    This may have already been mentioned, but have you tried changing the compiler complexity threshold? http://zone.ni.com/reference/en-XX/help/371361P-01/lvdialog/miscellaneous_options/#compiler It's possible it's trying to compile while you drag or something, and maybe the drag selector interacts with the UI thread more or something.
  22. smithd

    Dynamic dispatch for refnum of class

    There should not be; adapting to type is all you want, and that's what VIMs do. As pointed out, queues are not a runtime-dispatchable type, in the same way you cannot dynamic dispatch off a boolean or an array. The contents are irrelevant. It's worth noting that VIMs support dispatching based on method name rather than class hierarchy. There are examples of using VIMs to make lvclasses act kind of like interfaces (i.e. if you have two separate hierarchies with a method "Get queue" which returns a thing at connector pane position 3, you can put that in a VIM and it will adapt, including if that outputted thing has a different type). As a side note, you can and should wrap the queue status and index primitives inside of a disable structure.
  23. smithd

    plugin HAL

    I'm curious how this would work in more detail, if you could share. It sounds interesting, but it also sounds like...a lot. For example, I'm not sure where Kibana fits in, and I'm curious what Mongo gets you that you couldn't get with a more common database like Postgres. This kind of reminded me of what you are describing. It wasn't really applicable for me, but it came to mind when I read your post. It looks like their "driver" can (in at least this case) just be an ini file. Since it sounds like you're open to python, this has always been on my "neat, but I don't have a use now" list: https://airflow.apache.org/ It's a task graph, and you can use something like celery or zeromq or amqp to move data to those tasks. Also, on the topic of Kibana+databases, this tool is way cool: https://github.com/apache/incubator-superset It's basically a graphical SQL editor in python/html which talks to any SQL DB python can talk to, with a bunch of cool visualizations. I've not used it 'in production', but it's similar to (and 1000x better than) something we had built in house, and I demo'd it to some folks and they liked it almost as much as I did. I know it's not related to this topic at all, but I like it.
  24. smithd

    Trying to play the video in Reverse

    MJPEG doesn't work like that; it's literally a bunch of JPEGs in a wrapper. Random access should work as well as forward access. I'd bet forward access has some caching and pre-fetching though, so for random access check your CPU load, memory, and disk usage while trying to play the video. I'd bet you're hitting some limit. Specifically, you are asking a lot of Windows to smoothly unpack a randomly accessed frame every 34 ms using software decoding.