smithd

Posts posted by smithd

  1. For fun I thought I'd make a list of the reasons I can remember why people sometimes choose UDP over TCP.

    • Connection overhead of TCP (initiating a connection)
      • Mainly a big deal for web browsers (each page has to connect to several domains, and each domain takes a few TCP connections -- usually 2, I believe -- which introduces latency)
        • This is part of why HTTP/3 exists
      • Not a big deal for a 2 hour test where you open one connection
    • Don't need packet de-duplication or re-transmits
      • video streaming
      • or there is an application-specific usage pattern that makes application-layer handling of faults the better route (HTTP/3)
      • This application needs reliable transmission as it does not implement reliability at a higher level
    • Want to avoid ordered transmission/head-of-line blocking
      • This really means you are implementing multiplexing at the application level rather than at the TCP level -- it's a hell of a lot easier to open 10 TCP connections, especially in applications on closed networks which are not "web scale"
        • This is the reason HTTP/2 exists. HTTP/2 has connection multiplexing on TCP, HTTP/3 has connection multiplexing over UDP.
      • Given the reliable transmission and rate requirement, I'm assuming ordered transmission is desired
    • Want to avoid congestion control
      • Bad actor attempting to cause network failures
      • or: self-limited bandwidth use
        • This application falls under this category
      • or: Implement congestion control at the application layer (HTTP/3)
    • Memory/CPU usage of the TCP implementation
      • Erm...labview
    • Network engineers want to heavily fiddle with parameters and algorithms without waiting for the OS kernel to update
      • HTTP/3 is supposed to be faster because of this -- TCP is tuned for the networks of 20 years ago, or so it's been said, and HTTP/3 can be tuned for modern networks
      • I'm assuming this is not Michael

     

    On a closed network, for this application, it's hard to see a benefit to UDP. (It occurs to me Michael never said it was a closed network, but if he put a Phar Lap system on the internet...😵)

  2.  

    I'm using LabVIEW to do all this. So not sure how to do RUDP. I'm using UDP because of the low overhead. But maybe I can still get the throughput with other methods. I will have to experiment and see. I've just never considered that this issue would come up at the slow rate I'm using.

    On a closed network where lost packets are unlikely, TCP with Nagle turned off should have minimal overhead/latency vs. UDP.
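    Since I can't paste a diagram here, a minimal sketch of "TCP with Nagle off" in Python (the host/port names are placeholders, not anything from this thread):

```python
import socket

# Sketch: open a TCP connection with Nagle's algorithm disabled, so
# small writes go out immediately instead of being coalesced into
# larger segments. Host/port are placeholders for the RT target.
def open_low_latency_tcp(host, port, timeout=5.0):
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.settimeout(timeout)
    # TCP_NODELAY is the standard knob that turns off Nagle
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
    sock.connect((host, port))
    return sock
```

    In LabVIEW you'd get the same effect by flipping the equivalent option on the connection; the point is just that it's one flag, not a protocol change.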

     

    Looking at this post by an NI employee, it seems that the ARP table cannot be statically defined on Phar Lap.

    Is there any way to send the ARP manually -- i.e. call into the winsock API (Phar Lap :( ) and force an ARP every 5 minutes to refresh the table?
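    One low-tech workaround (an assumption on my part, not something from the NI post): any traffic to the peer's IP will make the stack re-resolve the MAC if the ARP entry has expired, so a tiny periodic datagram keeps the entry warm. Sketch in Python; port 9 (discard) is arbitrary since delivery doesn't matter:

```python
import socket

# Sketch: keep a peer's ARP entry fresh by emitting a one-byte UDP
# datagram periodically. Only the outgoing frame matters -- nothing
# needs to be listening on the far side.
def send_arp_keepalive(sock, peer_ip, port=9):
    return sock.sendto(b"\x00", (peer_ip, port))
```

    You'd call this from a housekeeping loop every few minutes, shorter than the ARP cache timeout.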

  3. 2013 is my go-to for stable+old. 2010 and 2011 had the new compiler; 12...I dunno what happened with 12, but it sure seemed to be unstable. It's probably a lie my brain made up, but it always seems like the even years are much worse than the odd years, the exception being that 2018 seems pretty solid.

    2013 has the new compiler (2010) with the performance issues resolved (2012), auto-concatenating terminals (2012), a somewhat improved web service scheme, get-class-instance-by-name, and the .NET CLR bumped to 4.0 (I know this may be a negative to you ;) ). The only big issue I know of is the attached code comment arrow thing, which results in an instacrash at a later date if you duplicate any multi-frame structure that has one inside.

    You miss out on custom right-click menus (2015), VIMs and read-only DVRs (2017), and the CLI, Python node, and type-disable structure (2018) -- of these, only the custom right-click menus are probably helpful.

  4. 5 hours ago, Ratataplam said:

    3) read the array, row-by-row, with an auto-indexed tunnel loop

    4) Inside the loop send the data, organized into a cluster, to cRIO FPGA via RT-FIFO
    5) On FPGA keep in listen on the RT-read and get the message when ready

    Not sure if this was a typo, but RT FIFOs are between loops on the RT side. You would use a host-to-target FPGA FIFO to pass data down to the FPGA for output. These FIFO types do not support clusters, so you'd have to devise your own scheme for passing data down. This may be as simple as sending values in chunks of N, but you do have to watch out for overflows.

    If you are sending data from RT to FPGA there is a likelihood of underflow, depending on the output rate. You would probably want to load all your file data into the FIFO first and then tell the FPGA to start reading. This all depends, of course, on how much data you have. If you have 10 minutes of data at a rate of 1 Hz, this is overkill but would still work. If you have 10 minutes of data at 100 kHz, then you likely won't have enough memory, so you'll want to preload the FIFO and then feed data as fast as you can.
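    The preload-then-feed pattern can be sketched like this (Python, with a bounded queue standing in for the host-to-target FIFO; the names and depth are illustrative):

```python
from queue import Queue, Full

# Sketch: fill the FIFO completely before telling the FPGA to start,
# then return whatever is left over to be fed as space opens up.
def preload_fifo(samples, fifo_depth=1024):
    fifo = Queue(maxsize=fifo_depth)
    remaining = []
    for i, sample in enumerate(samples):
        try:
            fifo.put_nowait(sample)
        except Full:
            remaining = samples[i:]  # feed these after the FPGA starts
            break
    return fifo, remaining
```

    After the preload you'd signal the FPGA to begin reading and push `remaining` in as fast as the FIFO accepts it.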

     

    You'll also want to keep the signals side of this in mind. I'm not great in this area, but I would imagine that if you are attempting to replicate a continuous signal, you'll want to output as fast as possible to reduce frequency spikes as the output changes.

  5. 1 hour ago, Aristos Queue said:

    I encourage Ben not to worry too much about the interoperability with G and focus on getting the new language stabilized within itself...

    I'm definitely not expecting anything he said to be set in stone, I was just curious about the concept. And to reinforce one of the bigger use cases that comes to mind for me:)

    1 hour ago, Ben Leedom said:

    For any types that Rebar cannot pass raw to VI, it must wrap them in refnums. This amounts to having a reference-counted shared object between Rebar and VI, so there are still some Rebar types that wouldn't qualify, but it should be enough to allow the most interesting Rebar-created values to VI and have the runtime maintain their invariants. In this way, you could define a TCP connection type, a file handle type, an IMAQ image type, or whatever you want in Rebar, and provide an API for it back to VI with refnums. This would have the nice result of allowing you to re-implement many parts of the LabVIEW runtime in Rebar.

    That sounds like a pretty cool concept. I don't really have any more specific thoughts, it just sounds like a nice way forward.

     

    I do have one more somewhat related...I guess question...that comes to mind. So I think for me the exciting part is all of the reference-lifetime-related stuff, but your first bullet point is improved memory management for large data sets. Most large data sets are large because they are for analysis, and most of the analysis functions (both for images and for waveforms) are C DLLs. My question, if you want to call it that, is: does that confound things at all? For example, I know some types passed to the CLFN require full data copies anyway (i.e. the substring, per that buffer allocations thread I linked to earlier).

  6. 4 hours ago, Aristos Queue said:

    I agree.

    Ben works on making a language safe for references. I work on making a language without references. Our goals are the same: a world where data can be trusted.

     

    7 hours ago, Ben Leedom said:

    I'm not sure if we're agreeing or disagreeing here. My claim is that the way LabVIEW does dataflow--meaning wires carry immutable values that can be used in any number of concurrent places downstream--means that refnums are probably the best way of referencing objects that is available to LabVIEW. ....

    ..I think to make life simpler for the compiler, you have to change the language, like I said above--maybe that answers it? Also, rather than "relying on the user to manage lifetimes," I would state it as "providing a set of safe and complete rules for lifetimes and guiding the user into writing code that follows the rules."

    Fair enough on all the above -- I think the answer is that we are agreeing for the most part, I'm just being pushy 🧐 . To follow up on the above and on AQ's point: my last comment was more or less attempting to reach this question, but didn't quite get there:

    How does this work together with the existing system?

    I think most of my questions and comments above were basically revolving around how these concepts apply within the world of a normal VI. To try to illustrate what I'm getting at, I'll go back to my vision application above. If I understand you, it seems like the way it is now, Rebar references may exist only within Rebar diagrams. But if I have my 5 vision-related loops described above, my logger will definitely be using standard file refnums, acquisition will be using IMAQ refnums, networking will have TCP refnums, so presumably all these top-level loops will be standard, non-Rebar VIs. But I want to share my data between them safely and without copies. So I'm questioning what the approach is for melding the two types of code. Is the idea to make more of the refnum types into Rebar types, so there is a TCP rebar and an IMAQ rebar and a file rebar (this is what I was trying to get at with the "baggage-free VI" comment)? Or would the reverse happen? Or am I totally misunderstanding?

  7. Most of your responses made sense so I'm just going to pick out the few bits that made less:

    9 hours ago, Ben Leedom said:

    I had a draft of the overview that listed the specific lifetimes of places, but then removed it. A local variable's lifetime is bounded by that of its containing VI's clone, because it is stored in the VI's dataspace--that is, it persists not just for the entirety of calls to the VI but also across calls. A shift register's lifetime is bounded by the loop where it is defined, unless it is uninitialized, in which case its lifetime is like that of a local variable's--it persists across calls.

    The local variable lifetime is kind of my point -- its life isn't really limited, because the data space for a reentrant VI remains live for the duration of the application. Similarly, I was under the impression the shift register value is also permanent. As an example, I seem to recall a while back running a loop that allocates a giant (>>1 MB) array on a shift register but never wires it out of the VI. If you do that, the large array remains part of the VI's data usage. From the perspective of the details you mentioned, yes, that giant array is on the heap somewhere, but all that data is still live until something overwrites it, making that data location permanent as well, even if the shift register is initialized.

    9 hours ago, Ben Leedom said:

    The compiler does try to determine the lifetime and usage of data and make decisions about when to copy it based on that. There are several factors that confound the compiler's analysis and force it to be conservative:

    • The compiler tracks signature information for each VI about how its outputs and inputs are in-place to each other. Dynamic dispatch calls represent calling one of many possible signatures, not all of which may be known at compile time (in the case of dynamic loading), so those calls generally have to be conservative with copying data.
    • Calling a VI by reference asynchronously also requires being conservative, because you can no longer directly tie when a call begins to when it ends.
    • Refnums are particularly bad for determining lifetime, because you can copy the refnums themselves anywhere--uninitialized shift registers, global variables, queues, DVRs, fields of derived classes stored in base class wires, variants--such that it is impossible to determine statically how many different places might all point to the same object. The runtime does not increment a reference count every time it copies a refnum, so it has to be extremely conservative about when an object can be freed--it's pretty much either, "the user just called Close Reference on this" or "the program's going away now." It's not just that some refnum types let you globally access instances by name, it's more that refnums are all the concurrency-unsafe baggage of pointers without the advantage of referring to a literal memory address.

     

    Good explanation, but on the refnum point I keep coming back around to: yes, this is something I would absolutely love to see improved and resolved, but it seems like a band-aid over an underlying flaw in how LabVIEW does dataflow.

    9 hours ago, Ben Leedom said:

    One of the goals of Rebar is to provide a basic function document that retains absolutely no state across calls, and thus can completely dispense with the notion of reentrancy as a property of a function. That in turn simplifies the notion of defining an anonymous function.

    Now that's something I can definitely get behind. If NXG can have "gvi"s or "good, baggage-free VIs without depending on the design decisions from 3 decades ago that might not have aged as well" I might switch sooner :)
    But in all seriousness, it seems like you've made your own type of VI with compiler rules associated with it. Is there a way to expand that more broadly, making life simple enough for the compiler (per your points above) that it can figure out the lifetime of objects? I.e. to reach the goal of stateless functions, but without relying on the user to manage lifetimes?


  8. 2 hours ago, drjdpowell said:

    You should kudo this then: Means to register a DVR-cleanup callback for use when DVR released when VI goes idle.  This would allow DVRs to wrap all dll pointers, with proper cleanup regardless of how the VI stops.

    Definitely kudoed, but Mercer's responses make it sound like this is never gonna happen. The abort button issue in particular seems to also apply to Rebar, although it's not clear whether the NXG abort has the same semantics.

     

    I like your point on references, but the counter is also worth bringing up -- since everything is top level, reliable sharing is impossible between two things with different lifetimes unless one is always long-lived and defined as the owner. This is often fine, but it can bite people who are unaware.

  9. It seems interesting, and very cool that NXG lets you do stuff like this, but I do have some questions and comments.

    All of the below is without having installed it. I tried, but it requires NI Package Manager 19. NIPM 19 is not released as far as I can tell (I downloaded the latest, 18.5.1, from ni.com, and when it loads it doesn't seem able to reach the server). More to the point, why does NIPM continuously have this issue where it requires a specific version to unpack the files? I honestly don't think I've ever successfully installed a package with NIPM on the first go <_<

     

    To start, I think you need to get more technical in the overview. For example, you say things like local variables and shift registers are "places" and that "places" have lifetimes...but what is the lifetime of a local variable or a shift register? The explanation of how, in Rebar, a set of wires corresponds to a single "place"...sounds just like a standard LabVIEW buffer allocation. My point is that you're making something for the type of person who might say "oh neat, a Rust+LabVIEW mashup!!" but then giving a 1000 ft explanation of how it works 👻

    Quote

    Memory performance is easier to control: The language makes potentially expensive data copies explicit, and allows values to be re-used and modified in-place without playing "hide the copy dots" or using special syntax like the In Place Element Structure.

    This is just a nitpick, but LabVIEW already makes data copies explicit. What it doesn't make explicit is the optimizations that remove the need for copies :P.

    Also, they aren't copy dots, they are buffer allocations...

    Quote

    Values that need safe resource cleanup can be expressed graphically: Because all values have a definite lifetime defined by their owner, any value type can define what happens when it needs to be cleaned up (similar to a destructor in C++, a finalizer in C#, or the Drop trait in Rust). This means that values that represent system or hardware resources or other shared objects--for example, file handles, network connections, device sessions, or queues--that would be implemented by the runtime or by non-graphical plugins in G can be implemented graphically in Rebar.

    It's obvious that LabVIEW knows what to do with a VISA refnum (and file refnums, and any other refnum I can think of) when the application closes...which I think is a positive for LabVIEW, so I'm not sure how often it's actually valuable to define cleanup (especially given that it sounds like the cleanup is off-diagram). That having been said, I can think of one super exciting use for this, which is hardware access via a DLL. If this tool lets you say "this pointer-sized int is a pointer, and when the program exits call this function", that's pretty awesome. And much better than the unreserve callback nonsense in the CLFN.
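    To illustrate the "pointer plus cleanup callback" idea (as an analogy only -- this is Python's finalizer machinery, not anything Rebar actually exposes; all names here are made up):

```python
import weakref

# Sketch: wrap a pointer-sized handle so a release function runs
# automatically when the wrapper dies or at interpreter exit --
# i.e. "this int is a pointer; when it goes away, call this."
class DllHandle:
    def __init__(self, raw_ptr, release_fn):
        self.raw = raw_ptr
        # finalize() fires once, on collection or at exit
        self._finalizer = weakref.finalize(self, release_fn, raw_ptr)
```

    The point is that cleanup is attached to the value's lifetime rather than relying on the caller to remember a Close call.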

    Quote

    Safe references instead of refnums: G refers to objects with lifetimes using refnums, or typed handles to objects managed by the LabVIEW runtime. Refnums have many of the unsafe characteristics of raw pointers: they must be manually freed to avoid leaks, but freeing a refnum in one place leaves any other places holding the same refnum with a dangling reference. Furthermore, the refnum's object's methods must implement their own thread safety, since they can in theory be called from multiple concurrent places in a G application. Rebar references have none of these problems; the compiler prevents you from using or freeing them unsafely, allowing references to be implemented under the hood as raw pointers, without the runtime needing to manage objects.

    I'm kind of confused by this, and it probably comes from my comments at the top about not really getting what's going on under the hood.

    To me the problem with the existing system is that refnums point at global things. For example, if I allocate a queue inside of A.vi and have no terminal which outputs the value of the queue refnum, then when A.vi finishes executing the compiler SHOULD kill the queue.....except that queues are global. I'd much much much much rather have the compiler figure out the lifetime of the queue for me and kill it when the wire can't go anywhere else. It seems clear that this addon allows for that, which is awesome, but why do we need an addon? I thought the whole point of dataflow was that the compiler knew about the lifetime of data and could make intelligent decisions based on that?

    Quote

    More modern, higher-level language features: Rebar should be able to provide features that appear in other languages, like anonymous functions and closures or iterators and generators, that would be awkward to implement by-value in G or would require adding new refnums to the language.

    I don't know enough about iterators and generators, but why do you think anonymous functions are aided by this? To be honest it's not clear to me why anonymous functions aren't possible right now, nor closures.

     

    I think a good summary of my comments above is basically this: hey, that's pretty neat -- why isn't this already part of the language?

     

    I wanted to finish up with an application-specific question. One of the big use cases for avoiding copies is image processing, but unfortunately IMAQ images are not only global, they are also not reference counted like named queues (i.e. the worst of both worlds). So if, for example, you are an image acquisition loop and you want to share your latest snap with all your closest friends (the logging loop, the network loop, the processing loop, the display loop), it's difficult to share the references in such a way that the image is freed when all the other loops are done. If it were a queue, you could obtain several named references to the same queue, and the queue only goes away when all the references are dead. I ended up implementing this myself with a DVR, but it still requires that users call the 'release' function -- ideally, and what I think Rebar brings to the table, is an automatic release. Does this Rebar system solve that? If so, sweet. You should share it with the IMAQ group :D
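    For reference, the manual-release scheme I'm describing looks roughly like this (a Python sketch of the idea, not my actual DVR code; the class and callback names are invented):

```python
import threading

# Sketch: each consumer loop acquires the shared image, and the
# backing data is freed only when the last holder releases it.
class SharedImage:
    def __init__(self, pixels, on_free):
        self._pixels, self._on_free = pixels, on_free
        self._count, self._lock = 1, threading.Lock()

    def acquire(self):
        with self._lock:
            self._count += 1
        return self._pixels

    def release(self):
        with self._lock:
            self._count -= 1
            last = (self._count == 0)
        if last:
            self._on_free(self._pixels)  # e.g. dispose the IMAQ image
```

    The pain point is exactly the `release()` call -- forget one and the image leaks, call one twice and you free early. Automatic lifetime tracking is what would make this go away.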

     

  10. Yeah, websockets are easy; there are a dozen libraries out there for setting them up using the TCP primitives.

    50 ms of delay could be Nagle -- if it's a closed network, 50 ms is at least 25 round trips, likely more.

    One possibility for the long, long delay is if you're using a hostname rather than a numeric IP address. The hostname lookup is a disaster for any networking code, at least on Windows.
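    The fix is cheap: pay the slow lookup once and reuse the numeric address for every subsequent connect. A Python sketch of the idea (function names are mine):

```python
import socket

# Sketch: resolve the hostname once up front...
def resolve_once(hostname):
    return socket.gethostbyname(hostname)  # the slow path, done one time

# ...then connect by numeric IP from then on, skipping name resolution.
def connect_by_ip(ip, port, timeout=5.0):
    return socket.create_connection((ip, port), timeout=timeout)
```

    In LabVIEW terms: do the name-to-IP conversion once at startup and wire the numeric address into your open-connection calls.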

  11. When you say cDAQ, do you mean a normal cDAQ or one of the Windows/RT cDAQs? Assuming Windows/RT: if you just want some basic messages, maybe set up a web service with a named queue to forward messages to. Then you don't need an actor -- just call the HTTP API directly (or spawn it as a task).

  12. 11 hours ago, LogMAN said:
    1. Move static functions outside the loop

      Inside the inner while loop are some multiplication functions that multiply values that come from outside the loop. These functions will always produce the same result inside the loop and therefore should be placed outside.
      It's a small change, but stuff like this makes the diagram easier to comprehend.

    It's worth mentioning that LabVIEW should do this for you (it's called loop-invariant code motion), so don't worry about it for performance reasons. You should put the code wherever it's more readable.

    In this case, I'd argue the most readable form is leaving the math where it is, but moving the source values (array size, iteration terminal, control) closer in. I'd also say use a local variable for the control -- the downside is that using a local variable will, in fact, stop the loop-invariant code motion from happening 😕

    In this particular case, I'd like to mention that you can make code more readable through various means -- one is code movement, another is labelling wires or using self-labelled elements like clusters. You can also add comments, but before you go adding comments I would always, always look at the code and see if it can be changed to make the meaning more self-evident. To stick with the current section of code: personally, I find pretty much all iteration-based math to be incomprehensible. I sat here for a full minute trying to figure out why on earth you were taking max(0, i-10)...but stepping back for a second, it looks like it should probably just be a case structure with two cases, "..10" and "11..". There is already a case structure in this code -- in your version it's hidden inside the max() function -- and if you use an actual case structure it becomes explicit. Then you still have the problem of "why 10?", but at least anyone reading the code is not stuck back at "why maximum?".
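    To make that concrete in text form (Python standing in for the diagram; the threshold of 10 is from the code under discussion, the function names are mine):

```python
def listbox_index_hidden(i):
    # original style: the branch is buried inside max()
    return max(0, i - 10)

def listbox_index_explicit(i):
    # same logic as an explicit two-case branch ("..10" / "11..")
    if i <= 10:
        return 0       # early iterations: index stays pinned at 0
    return i - 10      # afterwards: index tracks the iteration count
```

    Both compute the same thing; the second one answers "why maximum?" before the reader has to ask it.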

    Along these lines, one thing that might really help you is to find yourself a buddy and try to explain the code to them (bit by bit, I mean, definitely not all at once). As LogMAN said, the application isn't really all that complicated, and I would bet that anyone with a semi-softwarish or logical background could follow along if you walked them through it. The purpose would be to rethink code like the above. If you're sitting there explaining to another human being "well, I want the index of this listbox to be 0 up until iteration 10 and then I want it to start incrementing by 1, so what I did is I took the maximum of 0 and the iteration terminal and then built that into an array with 0 and used that as the index", you might start thinking to yourself "man, I really need to make this easier to follow".

  13. 18 hours ago, ensegre said:

    Matlab should be perfectly able to use visual studio afaik, but alas, there is a certain third party library which I managed to import rather easily in linux with gcc, whereas the corresponding windows version of it - well, I haven't even started to bugger about what VS wants from my life there (I might have to, soon... 😒)

    Ah yeah I read a note that said a specific visual studio version was no longer supported, should have scrolled down further.

  14. 5 hours ago, JKSH said:

    Side note: I also noticed that MATLAB is using GCC, not Visual Studio -- Look closely, and you'll see the compiler path: C:\TDM-GCC-64\bin\gcc. This means, even though you've told it to use Visual Studio, it's still using GCC.

    Yeah I looked again, I don't think matlab can use visual studio.

    10 hours ago, farzane lk said:

    Seeing the error, it made me wonder if anything would change if I actually connect the device. (till now I was just trying to make the "loadlibrary" command work.)

    Nah this is a parsing problem.

     

    Maybe I'm misreading the error, but it looks like it doesn't know what __cdecl is -- __cdecl is Visual Studio-specific. Since most compilers use that calling convention by default anyway, you can safely remove all instances of __cdecl from stim.h.
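    If the header is big, a one-liner can do the stripping (a sketch; run it over a copy of stim.h, and note the function name is mine):

```python
import re

# Sketch: remove Visual Studio-specific __cdecl tokens from header
# text. The declarations mean the same thing without them on
# compilers that default to the cdecl convention.
def strip_cdecl(header_text):
    return re.sub(r'\b__cdecl\b\s*', '', header_text)
```

    Read the file, pass its contents through this, write it back out, and the Matlab-side compiler should stop choking on the token.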

  15. I have two answers which might help:

    Answer A: Purchase TestStand and learn it. Without specific details I can't be sure, but it looks like the sort of application TestStand is built for. If someone gives you crap about the cost, I'd argue that for sequential things like you've shown here TestStand is a lot more maintainable (the bus factor for the code above is likely 1, and potentially 0 after a few months away from it). I'd also add that it has a lot of standardized reporting built in...and if you're calibrating stuff, that would seem to be critical. So seriously, at least let someone from NI spend an hour giving you a demo.

    Answer B: It looks to me like your first step might be to take some of those 50 front panel controls and put them into subVIs that are set to run as dialogs (in VI Properties: show front panel when called, close front panel when done). Those dialogs can take some of the logic that's in your main loop and organize it a bit -- "this event case is associated with this user input", and so on. Those dialogs can return small clusters with the configuration (Monitor is a cluster of model[str], serial[str], and resistance[u16]). If at the end of this you still have a ton of wires, as hooovahh says you can turn that into a clustersaurus/BFC, but it's still better organized (clustersaurus->monitor->model vs. just having a control called "monitor model" sitting out there).

    Once you have some of what I'm going to call the "non-business logic" (e.g. the UI logic) out of the way, I think a state machine is a reasonable migration point to start with. I would add a caveat: you should also learn about the different communication tools within LabVIEW -- in this specific case, queues and user events (a type of queue which works with the event structure), or "channel wires" (intended as a simpler wrapper around the queue/event concept). I say this because it looks like there are several long-running tasks without user interaction, so creating parallel loops to run different tasks seems like the next logical step. In general you would use a queue to send commands (e.g. "start running standard cal") from the UI loop to the task loop, and use a user event to send information (e.g. "I'm done" or "an error occurred") from the task loop back to the UI loop. drjdp has a video on some of the considerations for this here, although it may be too much for you right now -- he's coming at it from the other end: "I've been using this pattern for a while and here's where bad stuff happens".
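    The two-loop queue/event pattern, sketched in Python (threads and queues standing in for the parallel loops and user events; the command strings are just examples):

```python
import queue
import threading

# Sketch: the UI loop queues commands to a task loop, and the task
# loop reports completion back on a second queue (the stand-in for
# a user event the UI loop would handle in its event structure).
commands, events = queue.Queue(), queue.Queue()

def task_loop():
    while True:
        cmd = commands.get()
        if cmd == "stop":
            break
        # ... long-running work would happen here ...
        events.put(("done", cmd))

worker = threading.Thread(target=task_loop)
worker.start()
commands.put("start standard cal")   # UI -> task
result = events.get()                # task -> UI ("I'm done")
commands.put("stop")
worker.join()
```

    The key property is that the UI loop never blocks on the work itself, only on messages about it.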

    Once you've mastered this version, and if you feel like it's still complex, the next step would be to dive further into frameworks (things like Delacor's QMH, drjdp's Messenger Library, or NI's Actor Framework, which are arguably in order of increasing abstractness) -- in these frameworks more stuff happens asynchronously, which can make the code more modular (the "standard cal process" is always over in this one library while the user dialog input is always over in this other library), but there's obviously a big learning curve, and frameworks tend to require you to fit into them rather than the reverse.

     

  16. 13 hours ago, viSci said:

    The error handling is pretty good at pointing you to the offending spot in the .h file. I think that the wizard has improved over time so if your last exposure was years ago I would give it another try.

    Fair enough, it's definitely been a while. Last I remember, I had to have the whole Visual C SDK on my machine because it wanted to find some standard header.

    On 2/3/2019 at 10:11 AM, ensegre said:

    I end up still spending some amount of time, often significant, domesticating the .h files provided with the libraries

    To me this is one of the bigger annoyances -- I don't want to fix up the .h file; that seems iffy to me. I suppose it generally doesn't matter since it's just used to help the script along, but...

    On 2/3/2019 at 8:30 AM, JKSH said:

    However, unless the C API has anything non-trivial like passing a large struct by-value or strings/arrays, you'll end up having to write your own wrapper DLL, right

    Usually strings and arrays are OK, but yeah, big structs can be iffy due to offsets -- that is one nice thing about this LLVM-based tool: it tells you all the byte offsets, so technically one could pass in an appropriately sized buffer and restructure the data...but that's a pain as well.

    My understanding from what came up is that Matlab is attempting to build a wrapper DLL with some more Matlab-friendly interface. A quick google didn't find anything about how it does this, but that seems to be what it's doing. I say that because the "we don't know the..." errors come from the LabVIEW headers. So it seems like Matlab is running a compiler against stim.h, which includes the platdefines header, and since the Matlab compiler is presumably not one of the ones NI was expecting when they built this (or Matlab simply doesn't #define the same definitions NI is expecting), the compiler correctly spits out the errors shown, as in this snippet of platdefines:

    	#ifdef _M_PPC
    		#define ProcessorType	kPPC
    	#elif defined(_M_IX86)
    		#define ProcessorType	kX86
    	#elif defined(_M_X64)
    		#define ProcessorType	kX64
    	#elif defined(_M_ALPHA)
    		#define ProcessorType	kDECAlpha
    	#elif Compiler == kBorlandC
    		#define ProcessorType	kX86
    	#elif defined(_ARM_)
    		#define ProcessorType 	kARM
    	#else
    		#error "We don't know the ProcessorType architecture"
    	#endif

    So the question becomes: what should they be? My guess is that they should be defined to match the LabVIEW exe/dll, which is why I suggested the MSVC compiler definitions (as I understand it, this is what NI uses for Windows builds). However, it may be that the right answer is to edit the headers to set something more appropriate. For example, in the section above, you could comment every line out except for "#define ProcessorType    kX64". Similarly, you could comment out the compiler section and replace it with "#define Compiler kGCC" (if that is what Matlab uses, which I assume it is). The only reason these need to be defined is so that extcode.h picks up all the appropriate definitions and headers for the platform. For example, stim.h uses the type "uint32_t". This is defined in stdint.h, but if your compiler isn't including it already then stim.h can't be compiled. So fundtypes.h has a bunch of platform checks to see if it needs to define those types.

    The easier route, versus trying to make extcode.h provide the proper types, would be to just edit stim.h directly to include the type definitions you need. I would suggest editing your stim.h to look like this (lines 1-7):

    //#include "extcode.h"
    //this dll only uses two types whose definition varies by platform, and stdint.h defines both. This section is borrowed from fundtypes.h
    #if !defined(_STDINT_H_) && !defined(_STDINT_H) && !defined(_STDINT)
    	#include <stdint.h>
      	//alternatively comment the line above and uncomment these lines:
    	//typedef int             int32_t;
    	//typedef unsigned int    uint32_t;
    #endif
    
    #ifdef __cplusplus
    extern "C" {
    #endif
    .....

     

  18. If I had to guess, you'll want to edit STIM.h to include (at the very top, before anything else):

    #define _WIN64 1 //if 64 bit (note the all-caps spelling MSVC uses)
    #define _WIN32 1 //any windows, including 64-bit
    #define _M_AMD64 100 
    #define _M_X64 100 // same as above
    #define __x86_64__ 1 // the gcc/clang equivalent of _M_X64
    #define _MSC_VER 1900 // just a guess, this is ms visual c 2015

    The full list of Visual C++ predefined macros is at the link below, and you can also just go through the NI header file and look for definitions that make sense.

    https://docs.microsoft.com/en-us/cpp/preprocessor/predefined-macros?view=vs-2017
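    If you're not sure which macros the Matlab-side compiler actually defines, you can ask it directly. This is just a sketch, assuming a gcc/clang-style compiler on the PATH (`-dM -E` is their syntax for dumping predefined macros; MSVC has no exact equivalent):

```python
# Dump the macros a gcc/clang-style compiler predefines, so you know which
# of the platdefines.h checks will actually fire.
import subprocess

def parse_macros(text):
    # Output lines look like: "#define __x86_64__ 1"
    macros = {}
    for line in text.splitlines():
        parts = line.split(None, 2)  # ["#define", name, optional value]
        if len(parts) >= 2 and parts[0] == "#define":
            macros[parts[1]] = parts[2] if len(parts) == 3 else ""
    return macros

def predefined_macros(compiler="gcc"):
    # -dM -E on empty input prints every predefined macro and nothing else.
    out = subprocess.run([compiler, "-dM", "-E", "-"], input="",
                         capture_output=True, text=True, check=True).stdout
    return parse_macros(out)

if __name__ == "__main__":
    macros = predefined_macros()
    for name in ("__x86_64__", "_WIN64", "_M_X64", "__GNUC__"):
        print(name, "->", macros.get(name, "(not defined)"))
```

    On 64-bit Linux gcc you'd expect to see `__x86_64__` but none of the `_M_*`/`_WIN*` macros, which is exactly why the MSVC-oriented checks in platdefines fall through to the #error.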

  19. Just curious if anyone has ever made an attempt to improve scripting DLL imports. I know there's the built-in import wizard, but I've literally never seen that work (has anyone here?).

     

    The reason I'm asking is that I was thinking about it and came across this tool: https://github.com/CastXML/CastXML (binaries from the build link at the bottom here: https://github.com/CastXML/CastXMLSuperbuild)

    Essentially it takes a C header (or any C file, I guess) and processes it with LLVM to produce an XML tree of all the functions and types. For example, it takes the header here: https://github.com/nanomsg/nng/blob/master/include/nng/nng.h

    and produces XML which includes stuff like this:

      <Typedef id="_8" name="ptrdiff_t" type="_285" context="_1" location="f1:51" file="f1" line="51"/>
      <Typedef id="_9" name="wchar_t" type="_286" context="_1" location="f1:90" file="f1" line="90"/>
      <Typedef id="_14" name="intptr_t" type="_285" context="_1" location="f4:182" file="f4" line="182"/>
      <Union id="_78" name="nng_sockaddr" context="_1" location="f6:151" file="f6" line="151" members="_328 _329 _330 _331 _332 _333" size="1088" align="64"/>
      <Struct id="_61" name="nng_aio" context="_1" location="f6:96" file="f6" line="96" incomplete="1"/>
      <Struct id="_68" name="nng_sockaddr_in6" context="_1" location="f6:124" file="f6" line="124" members="_315 _316 _317" size="160" align="16"/>
      <Field id="_315" name="sa_family" type="_25" context="_68" access="public" location="f6:125" file="f6" line="125" offset="0"/>
      <Field id="_316" name="sa_port" type="_25" context="_68" access="public" location="f6:126" file="f6" line="126" offset="16"/>
      <Field id="_317" name="sa_addr" type="_385" context="_68" access="public" location="f6:127" file="f6" line="127" offset="32"/>
      <Typedef id="_79" name="nng_sockaddr" type="_334" context="_1" location="f6:158" file="f6" line="158"/>
      <Enumeration id="_80" name="nng_sockaddr_family" context="_1" location="f6:160" file="f6" line="160" size="32" align="32">
        <EnumValue name="NNG_AF_UNSPEC" init="0"/>
        <EnumValue name="NNG_AF_INPROC" init="1"/>
        <EnumValue name="NNG_AF_IPC" init="2"/>
        <EnumValue name="NNG_AF_INET" init="3"/>
        <EnumValue name="NNG_AF_INET6" init="4"/>
        <EnumValue name="NNG_AF_ZT" init="5"/>
      </Enumeration>
      <Function id="_84" name="nng_close" returns="_293" context="_1" location="f6:194" file="f6" line="194" mangled="?nng_close@@9" attributes="dllimport">
        <Argument type="_55" location="f6:194" file="f6" line="194"/>
      </Function>
      <Function id="_87" name="nng_setopt" returns="_293" context="_1" location="f6:205" file="f6" line="205" mangled="?nng_setopt@@9" attributes="dllimport">
        <Argument type="_55" location="f6:205" file="f6" line="205"/>
        <Argument type="_338" location="f6:205" file="f6" line="205"/>
        <Argument type="_339" location="f6:205" file="f6" line="205"/>
        <Argument type="_5" location="f6:205" file="f6" line="205"/>
      </Function>
      <FundamentalType id="_293" name="int" size="32" align="32"/>
      <FundamentalType id="_294" name="unsigned char" size="8" align="8"/>
      <FundamentalType id="_295" name="unsigned int" size="32" align="32"/>
      <PointerType id="_353" type="_352" size="64" align="64"/>
      <PointerType id="_354" type="_62" size="64" align="64"/>
      <PointerType id="_355" type="_47" size="64" align="64"/>

    It's pretty easy to parse because everything has an ID -- I've got some truly gross code for converting all the struct and enum definitions into LabVIEW, which is simple enough.
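    As a rough illustration of what I mean by chasing IDs, here's a sketch that resolves type references in a trimmed-down copy of the XML above (the `nng_socket` Typedef for id _55 is my own guess; that element isn't in the quoted snippet):

```python
# Walk CastXML output by id: build one dict of id -> element, then any
# "returns"/"type" attribute can be resolved with a lookup.
import xml.etree.ElementTree as ET

SAMPLE = """
<CastXML>
  <Function id="_84" name="nng_close" returns="_293">
    <Argument type="_55"/>
  </Function>
  <FundamentalType id="_293" name="int" size="32" align="32"/>
  <FundamentalType id="_295" name="unsigned int" size="32" align="32"/>
  <Typedef id="_55" name="nng_socket" type="_295"/>
</CastXML>
"""

def build_index(root):
    # Every node of interest carries an id attribute.
    return {el.get("id"): el for el in root.iter() if el.get("id")}

def type_name(index, type_id):
    # Typedefs and fundamental types both carry a name; for a real importer
    # you'd also recurse through PointerType/Struct/Union nodes here.
    return index[type_id].get("name")

root = ET.fromstring(SAMPLE)
index = build_index(root)
fn = next(el for el in root.iter("Function") if el.get("name") == "nng_close")
print(type_name(index, fn.get("returns")))                           # int
print([type_name(index, a.get("type")) for a in fn.findall("Argument")])
```

    From there, generating a Call Library Function node configuration is "just" a mapping from the resolved types to LabVIEW terminal types.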

    I guess I'm just testing the waters to see if anyone else thinks this is useful. My usual strategy right now is to minimize the number of C calls I make, but if I could just magically import a header and get the library made for me, it would be pretty cool. On the other hand, there are so many challenges associated with that which I hadn't thought about going in, like handling structs by value, or what to do with numeric return values (is it an error code? is 0 good or bad?). The particular library I selected as a test case above has two features that are super annoying -- its main 'session' reference is not a pointer like in most C APIs, but a struct containing a single int, and it also has a union type...for some reason. So even making this library work, which I thought would be an easy way to try it out, has ballooned in complexity a bit. Thoughts?

  20. Also, this is kind of a side point but the concept is the same. Has anyone noticed that reentrant VIs get super slow sometimes when debugging? It's not always, but I can't figure out what conditions might be causing it. It's like you're going along debugging some code, you step into a reentrant VI, and everything just stops, and it takes 20-30 seconds for the window to materialize. I know it's not just one computer, sadly.
