Rolf Kalbermatter

Everything posted by Rolf Kalbermatter

  1. It depends on your definition of safe! 😃 If the VI enforces proper data types (through its connector pane, for instance) and accounts for the size of the target buffer or adjusts it properly (for instance by using the minimum size setting in the Call Library Node to use a different parameter as size indicator, or by explicitly resizing the target buffer to the required size), this can be VERY safe. Of course it is not safe in the sense that any noob can go into that VI and sabotage it, but hey, making things foolproof requires an immense effort, and that is the overhead of the Typecast function. 😁 But making things engineer proof is absolutely impossible! 😀

     Also, a memcpy() call is only functionally equivalent to a Typecast on Big Endian machines. For LabVIEW that applied "only" to Mac68K, MacPPC, SunSparc, HP-UX PA-RISC, Silicon Graphics IRIX, IBM AIX, DEC Alpha and VxWorks (not all of which were ever officially released). The only LabVIEW platforms that really use Little Endian are the ones based on i386/AMD64 and ARM CPUs, which are the only platforms still shipping today.

     For me it really depends. I use it often in functions that deal with binary communication (if they use a Big Endian binary format; otherwise Flatten/Unflatten is always preferable). Here the additional overhead of the Typecast function is usually insignificant in comparison to the time the overall software has to wait for responses from the other side. Even with typical TCP communications over 1 Gb or faster fiber connections, your Read function generally sits there for several milliseconds waiting to receive the next data package. Shaving off a few nanoseconds or even microseconds from the overall execution time is completely insignificant in this case. If you talk about serial communication or similar, it gets even more insignificant. For shared library interfacing and data processing like image handling, the situation is often different, and here I tend to use memory copies whenever possible, unless I need to do specific endian handling. Then I use Flatten/Unflatten, as that makes it very convenient to apply a specific endianness.
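     To illustrate the endianness point, here is a minimal sketch in plain C (not LabVIEW code, and not how LabVIEW implements Typecast internally) showing why memcpy() only matches the Typecast semantics on Big Endian CPUs:

         #include <stdint.h>
         #include <string.h>
         #include <stdio.h>

         int main(void)
         {
             uint8_t bytes[4] = {0x12, 0x34, 0x56, 0x78};
             uint32_t raw, typecast;

             /* plain memory copy: interprets the bytes in the CPU's native byte order */
             memcpy(&raw, bytes, sizeof raw);

             /* Typecast semantics: the Big Endian interpretation of the same bytes */
             typecast = ((uint32_t)bytes[0] << 24) | ((uint32_t)bytes[1] << 16) |
                        ((uint32_t)bytes[2] << 8)  |  (uint32_t)bytes[3];

             printf("memcpy: 0x%08X, Typecast semantics: 0x%08X\n", raw, typecast);
             /* On i386/AMD64 and the (Little Endian) ARM targets these differ:
                0x78563412 vs 0x12345678. On a Big Endian CPU both print 0x12345678. */
             return 0;
         }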
  2. Take a look at Figure 10 in the datasheet. What you see there is that the positive edge of CNVST starts the SAR. DOUT then goes low after a certain amount of time (t12) that is needed for the SAR. This indicates the readiness of the data on DOUT. After a time (t3) the first MSB is output (and it could be a low too). You then need to sample DOUT, and after you have read it you can immediately assert SCLK. The positive edge of this indicates to the ADC that you have read the data and it can output the next data bit. Then you deassert the SCLK signal after (t4). The next data bit is available no later than (t4) after the falling edge. You can then read it and assert SCLK again, but not sooner than (t8) after the falling edge of SCLK. So the general sequence looks like this:

     1) assert CNVST and deassert SCLK
     2) deassert CNVST, not sooner than (t11) = 20 ns later
     3) wait for DOUT to go low (this takes time because the SAR is ongoing; this is not 10 ns but at most t12 = 525 ns, so you have to wait at least that long, but it is safer to simply wait until DOUT goes low)
     4) wait at least 10 ns more after DOUT goes low
     5) read DOUT, this is your first bit
     6) assert SCLK
     7) wait at least 4.5 ns
     8) deassert SCLK
     9) read DOUT, this is your next bit
     10) wait at least another 4.5 ns
     11) go back to 6) until you have read all your bits

     The timing is only critical in terms of not going faster than the times mentioned here. You can certainly wait longer if your FPGA loop timing makes this more convenient. The 4.5 ns high and low time for SCLK is a minimum, but if you don't go to the maximum sample rate (determined by the CNVST frequency) you can also clock out the data more slowly. You simply need to have enough time to clock out the data before you start the next CNVST. For those 4.5 ns waits I would simply use whatever your single cycle loop time is. That can be 5 or more ns; with the default FPGA clock of 40 MHz this would be 25 ns. With such a single cycle timed loop driven at 40 MHz your maximum sample rate would correspond to a period of about 525 + 10 + 16 * 50 ns = 1335 ns, i.e. roughly 750 kHz. A rough code sketch of this sequence follows below.
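     Here is that sequence as a rough C-style sketch, assuming hypothetical helpers set_pin(), read_pin() and wait_ns() that stand in for whatever your FPGA/GPIO framework provides (in a LabVIEW FPGA single cycle timed loop each wait is simply one or more loop iterations):

         #include <stdint.h>

         /* hypothetical hardware access helpers - replace with your own I/O layer */
         enum pin { CNVST, SCLK, DOUT };
         extern void set_pin(enum pin p, int level);
         extern int  read_pin(enum pin p);
         extern void wait_ns(unsigned ns);

         uint16_t read_adc_sample(void)
         {
             uint16_t value = 0;

             set_pin(CNVST, 1);             /* 1) start the SAR conversion           */
             set_pin(SCLK, 0);
             wait_ns(20);                   /* 2) t11: keep CNVST asserted >= 20 ns  */
             set_pin(CNVST, 0);

             while (read_pin(DOUT) != 0)    /* 3) SAR busy, worst case t12 = 525 ns  */
                 ;
             wait_ns(10);                   /* 4) settling margin after DOUT is low  */

             for (int bit = 15; bit >= 0; bit--) {
                 if (read_pin(DOUT))        /* 5)/9) sample the current data bit     */
                     value |= (uint16_t)1 << bit;
                 set_pin(SCLK, 1);          /* 6) rising edge: ADC shifts next bit   */
                 wait_ns(5);                /* 7) >= 4.5 ns high time                */
                 set_pin(SCLK, 0);          /* 8) falling edge                       */
                 wait_ns(5);                /* 10) >= 4.5 ns low time, then repeat   */
             }
             return value;
         }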
  3. Something looks odd in that timing diagram. As you are the SPI master, you should generate the CNVST and SCLK signals, right? And you should only start generating the clock signal after the SAR conversion time, which depends on your chip and XCLK or similar. The first rising (or falling) edge of SCLK should cause the ADC to output the first bit of data, and you should then sample the SDO input in your FPGA on the other edge to make sure you read a steady state and not some transient. What you show is that you expect the ADC to start streaming the data on its own and only then do you start generating the clock, but that can't work.

     This is the typical timing for an ADS8528 chip. I chose to keep the CONV pin active high for the entire duration of the worst case SAR processing time, based on the XCLK that is separately provided to the ADC. But that is not important for this chip, as it starts the SAR operation on the rising edge of this signal and only expects CONV to be low before /FS is activated; it does not care when exactly the CONV pin goes low between those two moments. The falling edge of the /FS pin (SPI chip select) starts the data cycle, and on the falling edge of SCLK, also generated by the FPGA, I sample the SDO pin. The rising edge of SCLK signals the ADC to output the next data bit. There are variations, such as not every chip using the /FS signal (or it might always be activated). But the principle is the same: activate CONV to initiate a SAR, wait for at least the worst case time the SAR will take, and then start clocking out the data bits. But you as master must clock these out; they won't just appear magically.
  4. According to this https://blog.cloudflare.com/when-tcp-sockets-refuse-to-die/ it would seem that listener sockets are not supposed to linger around. Still, just to be on the safe side, you should probably be prepared for a Create Listener right after a Close Listener to fail!
  5. The Internecine Avoider only comes into play when you use the high level Listen.vi. If you directly use the Create Listener and Wait on Listener primitives there is no Internecine Avoider, unless you add it yourself. But Wait on Listener CAN return other errors than timeout errors, and that usually means something got seriously messed up with the underlying listener socket; the most prudent action is almost always to close that socket and open a new one.

     Except of course that when you close a listener socket it doesn't just go out of existence in a blink. It usually stays present in the underlying socket library for a certain timeout period to catch potential late arriving connection requests and respond to them with a RST/NACK response, to let the remote side know that it is not valid anymore. And together with the SO_EXCLUSIVEADDRUSE flag this makes new requests to create a socket on the same port fail with an according error, since the port is technically still in use by that half dead socket. That socket eventually gets deleted and then a new Create Listener call on that port will succeed, unless someone else was able to grab it first. And even if you stay entirely within the same system and there is no actual network card packet driver involved, the socket library can reset itself, for instance when a system service or the user reconfigures the network configuration. But if your code doesn't do something like this:

         do
         {
             err = CreateListener(&listenRefnum);
             if (!err)
             {
                 do
                 {
                     err = WaitOnListener(listenRefnum, waitInterval, &connectionRefnum);
                     if (!err)
                     {
                         CreateNewConnectionHandler(connectionRefnum);
                     }
                     else if (err != timeout)
                     {
                         // if we have any other error than timeout, leave the loop
                         // which will close the listener and go back to create a new one
                         LogError(err);
                         break;
                     }
                 } while (!quit);
                 Close(listenRefnum);
             }
             else
             {
                 LogError(err);
                 Delay(someWaitTime);
             }
         } while (!quit);

     it will keep trying to listen on a socket that might have gone into an error condition long ago.
  6. Yes, it takes some time after closing the refnum until the socket has gone through the entire RST, SYN, FIN handshaking cycle with its associated timeouts. And that is even true if nobody has connected to the listener at that point to request a new connection. So with the SO_EXCLUSIVEADDRUSE flag you can end up having the listener fail multiple times to create a new socket on the specified port. The alternative of not using exclusive mode is however, in my opinion, not really a good option. And the Internecine Avoider actually is a potential culprit in the problem the OP observed. It doesn't really close the socket but rather tries to reuse it. The internal check whether the refnum is valid does not really check that the socket has not been in error, just that LabVIEW still has a valid refnum; the socket this refnum refers to may still be in an unrecoverable error and keep failing. To recover from an (admittedly rarely occurring) socket library error on the listener socket, the socket needs to be closed. And that means that a socket that has been opened with SO_EXCLUSIVEADDRUSE may actually be blocked from being reopened for up to a minute or more. But trying to reuse the failed socket is even worse, as that will never recover. If Wait on Listener fails with any other error than a timeout error, you should close the listener refnum and try to reopen it until it succeeds or the user exits the application/operation.
  7. What was said about the Internecine Avoider is only true if you use the high level TCP Listener.vi. I usually use the low level primitives Create Listener and Wait on Listener instead (and always close the refnum if I detect any error other than timeout). SO_EXCLUSIVEADDRUSE is in principle a good thing; you do not usually want someone else to be able to capture your port number.
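     For anyone doing this at the socket level themselves, here is a minimal Winsock sketch of creating a listener with SO_EXCLUSIVEADDRUSE (this is how the option is applied in plain C; the option has to be set before bind(), and I'm not claiming this is LabVIEW's exact internal code):

         #include <winsock2.h>

         /* assumes WSAStartup() has already been called by the application */
         SOCKET create_exclusive_listener(unsigned short port)
         {
             SOCKET s = socket(AF_INET, SOCK_STREAM, IPPROTO_TCP);
             if (s == INVALID_SOCKET)
                 return INVALID_SOCKET;

             int exclusive = 1;
             /* prevents another process from binding the same port, even with SO_REUSEADDR */
             setsockopt(s, SOL_SOCKET, SO_EXCLUSIVEADDRUSE,
                        (const char *)&exclusive, sizeof exclusive);

             struct sockaddr_in addr = {0};
             addr.sin_family = AF_INET;
             addr.sin_addr.s_addr = htonl(INADDR_ANY);
             addr.sin_port = htons(port);

             if (bind(s, (struct sockaddr *)&addr, sizeof addr) == SOCKET_ERROR ||
                 listen(s, SOMAXCONN) == SOCKET_ERROR) {
                 /* e.g. WSAEADDRINUSE while a previously closed socket still lingers */
                 closesocket(s);
                 return INVALID_SOCKET;
             }
             return s;
         }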
  8. If the socket library or one of its TCP/IP provider sub components resets itself, for whatever reason, it is definitely possible that a listener could report an error. This could happen because the library detected an unrecoverable error (TCP/IP is considered such an essential service on modern platforms that a simple crash is absolutely not acceptable whenever it can somehow be avoided) or even when you or some system component reconfigures the TCP/IP configuration somehow.

     My TCP/IP listeners are actually a loop that sits there and waits on incoming connections as long as the wait returns only a timeout error. Any other error will close the listener refnum and loop back to the Create Listener before going into the Wait on Listener state again. The Wait on Listener doesn't return an error cluster just to report that there is no new connection yet (timeout error 56); it can effectively return other errors from the socket library, even though that is rare. In case of any error other than timeout, I immediately close the refnum, do a short delay so the loop does not monopolize the thread if the socket library has some condition other than a temporary hiccup, and then go back to the Create Listener state until that succeeds. It's a fairly simple state machine but essential to continuous TCP/IP operation.

     Technical details: the Wait on Listener basically does a select() (or possibly poll()) on the underlying listener socket, and this is the function that can fail if the socket library gets into a hiccup.
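     To illustrate that last point, a rough POSIX-style sketch (the underlying idea, not LabVIEW's actual source) of what such a wait boils down to at the socket library level:

         #include <sys/select.h>
         #include <sys/socket.h>

         /* returns: new connection fd, 0 on timeout, -1 on a real socket error */
         int wait_on_listener(int listenFd, int timeoutMs)
         {
             fd_set readSet;
             struct timeval tv = { timeoutMs / 1000, (timeoutMs % 1000) * 1000 };

             FD_ZERO(&readSet);
             FD_SET(listenFd, &readSet);

             int ready = select(listenFd + 1, &readSet, NULL, NULL, &tv);
             if (ready < 0)
                 return -1;     /* socket library error: close and recreate the listener */
             if (ready == 0)
                 return 0;      /* timeout: no pending connection, simply retry          */

             int conn = accept(listenFd, NULL, NULL);
             return (conn < 0) ? -1 : conn;
         }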
  9. Not the VI itself, but if you have enabled the option to separate compiled code from the VI, then, since the VI is at a different path location, it is considered different from the original VI as far as the compile cache is concerned. And because there is no compile cache entry for that VI yet, LabVIEW will recompile it.
  10. Basically all OpenG libraries before version 4.0 were LGPL licensed. With 4.0 the license for the VI part was changed to be BSD-3. The libraries which use a shared library/DLL have different licenses for the shared library and the VIs. The shared library remained LGPL which should not be a problem as long as you post a link to the OpenG project. For libraries version 4.0 and higher this is the git link mentioned by Jim, for older libraries this is the sourceforge link.
  11. As mentioned, the library was a quick hack on an earlier library to add the bitwise operators. And it was likely a bit too quick a hack, messing up a few other things in the process. As you don't use bitwise operators, I would recommend looking at the original library, to which a link is included in that post.
  12. No flame from me for this. Under your constraint (only ever write from one place and never anywhere else) it is a valid use case. However, beware of doing that for huge data. This will not just incur a memory overhead but also a performance cost, as the ENTIRE global is copied every time, even if you immediately follow the read with an Index Array to get only one element from the huge array.
  13. Nope! You have to do it like in the lower picture. And while the order "should" not matter (it is, after all, the intent of reference counting not to allow a client to dispose of an object before all other clients have closed it too), I always try to first close the sub objects and then their owner (just as you did). There are assemblies and especially ActiveX automation servers out there that don't do refcounting properly and may spuriously crash if you don't do it in the right order.
  14. I can't work on that right now. But I have plans to do so in the coming months. The story behind it is that I did a little more than just make it 64-bit:

      - The file IO operations were all rewritten to be part of the library itself rather than relying on LabVIEW file IO. While LabVIEW 8.0 and newer supports reading and writing files that are bigger than 2GB, it still has the awful habit of internally using old OS file IO functions that are naturally limited to supporting only characters in file names that are part of your current locale, and they normally are also limited to 260 character long path names. If your drive is formatted in FAT32, that is all the drive can do for you anyhow, but except for USB thumb drives you would be hard pressed to find any FAT formatted drives anymore. So having these limitations in the library feels very bad. These two things are especially a problem on Windows. Mac is slightly less problematic, and Linux has long ago pretty much solved it all internally in the kernel and surrounding system libraries.

      - Modern ZIP files support things like symbolic links and I wanted a way to support them. For Linux and Mac that is a piece of cake. For Windows I may for now not be able to support that seamlessly, as creating symlinks under Windows is a privileged action, so the user either has to be elevated or you have to set an obscure Developer flag in Windows that allows all users to create symlinks.

      So in summary there was a lot of work to be done, most of it actually for Windows. Most of that is done, but testing it all is a very frustrating job. And the non-Windows targets will then also take some more time for additional testing and for making the things that were modified for Windows compile again properly. So yes, it's still on my to-do list and I'm planning to work on it again, but right now I have another project that requires my attention.

      Because of the significant changes in the underlying shared library and the internal organization of the VIs it will almost certainly be version 5.0. The official library API (those nice VIs with a green gift box in them) should remain compatible, but if you want to make use of the new path name feature to fully support long path names with full character support, you may have to change to the new API with the library specific path type. If you use the high level library functions, internal long path names will be fine; you just won't be able to access them with the normal LabVIEW file functions if they contain non local ANSI characters or are too long! It's the best I could come up with without the ability to actually change the LabVIEW source code itself to add that feature to the internal Path Manager in LabVIEW. 😀 The according File Utilities Manager functions in the library will also be available to the user in a separate palette.
  15. Let's suppose you create a .Net Image object. That image can potentially use many megabytes of memory. Any reference you obtain for that image will refer to the same image of course, so references don't multiply the memory for your image, but LabVIEW will need to create a unique refnum object to hold on to that reference, and that uses some memory, a few dozen bytes at most. However, every such refnum holds a reference to the object, and an object is only marked for garbage collection (for .Net) or self destructed (for ActiveX) once every single reference to it has been closed. So leaving a LabVIEW refnum to such an object open will keep that object in memory until LabVIEW itself terminates the VI hierarchy in which that refnum was created/obtained/opened, as LabVIEW registers every single refnum with the top level VI in whose hierarchy the refnum was created, and when that top level VI goes idle (terminates execution) the refnum is closed and the underlying reference is disposed. And to make matters even worse, if such an object somehow obtained a reference to one or more other objects, those objects will remain in memory too until the object holding those references is closed, and this can go on like that over many hierarchy levels, so a single lower level object can potentially keep your entire object hierarchy in memory. If and how an object does that is however specific to that object and seldom properly disclosed in the documentation, so diligently closing every single refnum as soon as possible is the best way to keep this manageable.

      Yes, aside from real UI programming I consider the use of locals and globals a real sin!

      "Oh really?! You never use global or local variables? You only use FGVs?"

      In fact the only globals I allow in my programs nowadays are booleans to control the shutdown of an entire system, or "constants" that are initialized once at startup, from a configuration file for instance, and NEVER after. The rest is handled with tasks (similar to actors) and data is generally transferred between them through messages (which can happen over queues, notifiers, or even network connections). Locals are often needed when programming UI components, as you may have to update a control or indicator or read it outside of its specific event case, but replacing dataflow with access to locals in pure functional VIs is a sure way to get a harsh remark in any review I would do. And while I have been a strong supporter of FGVs in the past, I do not recommend them anymore. They are better than globals if properly implemented (which means not just a get and set method, which is just as bad as a global, but NEVER EVER any read-modify cycle outside of the FGV). But they get awkward once you do not just have one single set of data to manage but want to handle an array of such sets, which is quite often the case. Once you get there you want a more database like method to store them, rather than trying to cram them into an FGV.
  16. The problem in this specific case is not about memory. A refnum uses more than just a pointer, but not much more (the underlying object may however use tons of memory!). The problem is rather that it is very easy to lose the overview of which local (or global) holds what, where it is initialized and where it needs to be deallocated. Yes, aside from real UI programming I consider the use of locals and globals a real sin!
  17. That's most likely because of this in the OpenSSL headers:

      # if OPENSSL_API_COMPAT < 0x10100000L

      These functions were required to be called in OpenSSL before 1.1.0, but since 1.1.0 OpenSSL automatically initializes its engines on the first call of any function that creates a context or similar session. And since 1.1.0 is already EOL too and you should use either 1.1.1 or, even better, 3.0.x, it would indeed be strange if your OpenSSL library still included those APIs. I think the old libssleay.so still contained them in 1.0.1, but it was unnecessary to call them, and when they changed the shared library names they also axed many of those compatibility hacks. You should probably call OPENSSL_init_crypto(OPENSSL_INIT_ADD_ALL_CIPHERS | OPENSSL_INIT_ADD_ALL_DIGESTS | OPENSSL_INIT_LOAD_CONFIG, NULL); and yes, this means reading the headers to see what numeric values those constant defines have.
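      For reference, a minimal C sketch of that call; the flag values in the comment are what the recent openssl/crypto.h headers (1.1.x/3.x) define, so verify them against the headers of the build you actually link against before passing them as plain numbers through a Call Library Node:

          #include <openssl/crypto.h>

          /* Flag values as found in openssl/crypto.h (verify against your own headers):
             OPENSSL_INIT_ADD_ALL_CIPHERS = 0x00000004
             OPENSSL_INIT_ADD_ALL_DIGESTS = 0x00000008
             OPENSSL_INIT_LOAD_CONFIG     = 0x00000040 */
          int init_openssl(void)
          {
              /* optional since 1.1.0, as OpenSSL does this implicitly on first use;
                 returns 1 on success and 0 on failure */
              return OPENSSL_init_crypto(OPENSSL_INIT_ADD_ALL_CIPHERS |
                                         OPENSSL_INIT_ADD_ALL_DIGESTS |
                                         OPENSSL_INIT_LOAD_CONFIG, NULL);
          }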
  18. Yes, absolutely. Each property (or method) node returning a refnum will increment the refcount of its object. So if you have two "CameraInfo" properties they may reference the same object, but they are in fact different references to the same object instance. In fact they use double memory for the LabVIEW refnum itself to manage the reference (an int32 plus some extra information, including the underlying .Net or ActiveX reference pointer), in addition to the actual object data space. That object will only go out of memory when ALL the references have been closed, and that means you have to close the according LabVIEW refnum so it can inform .Net or ActiveX that that reference is not needed anymore.

      There are actually other memory leaks in your code. First you assign the instance refnum for the originally opened Camera object to the NET Camera local control, then you Open a new reference to a new camera object and assign it to the same local control, losing the original refnum, so you can never close it yourself again (LabVIEW eventually will, when the code hierarchy in which you executed the Constructor goes idle, but that is typically only when you finish your program). Generally, using locals (or even worse globals, shudder) to store refnums is a very bad idea. It makes proper lifetime control of objects rather hard and error prone.
  19. Please note that unless you use a very old LabVIEW version, there is actually no need to put the Close Reference node into a loop. It will happily accept an array of refnums too. As to worrying about whether a Close Reference node is necessary or not during a code review, tell the reviewer that there are more important things to worry about. 😃 If those are the only things he can complain about, he is either worthless as a code reviewer or your code is prime quality. 😀

      For ActiveX (and .Net) refnums it absolutely and positively is important, to avoid potentially huge memory hogs (not really memory leaks, as LabVIEW still knows about the objects; they just never get disposed until you terminate your application). One of the VI reference close functions marked as questionable is also needed, the other is not! ActiveX and .Net objects are refcounted and will often prevent not just themselves but also parent objects from being deallocated, depending on whether the child object has somehow acquired a reference to the parent for some reason. Any such object wanting to retain access to another object is required to obtain a reference to it, which increases the refcount, and only once each reference to an object has been properly closed will the object actually be released. LabVIEW, simply being another client of the ActiveX or .Net object, needs to follow that rule too, so if you don't close the LabVIEW refnum, it won't release the underlying reference to the object, and any objects that this object has referenced will also stay in memory.

      So rather than erring on the wrong side I prefer to err on the good side and simply put a Close Reference everywhere, without even spending a Joule of energy reasoning about whether it is really needed. That little extra muscle exercise in the fingers is not that bad. A Close Reference on a refnum that does not need to be closed is simply a NOP (No Operation) in LabVIEW (well, factually it checks a dynamic attribute of the refnum and skips any attempt to close the refnum if it doesn't need to, but that check is about as expensive as your "Not A Number, Path, Refnum" node).

      So of all the marked Close functions, the ones marked OK are indeed needed. Of all the others only one can definitely be deleted (the array of control refnums obtained by the Control Reference Constants). All the ActiveX closes are definitely needed, and the refnum returned from the Controls[] property of the Cluster refnum might perhaps not be required. But I wouldn't want to spend any energy finding out about that, so I would leave it in, which leaves only one Close being clearly useless (the uppermost).
  20. You can use the Scan Engine, which by default has a scan interval of 10 ms if I'm not mistaken. It can go down to 1 ms, but that should not be used unless you know exactly what you are doing. However, some of the C Series modules do not have Scan Engine support, so you need to check that.
  21. If that 100k images per second is not a typo, you are definitely not just pushing the limits of what modern PC hardware can do, but in fact operating in la-la land. Even super high speed cameras don't get that high.
  22. That's the other possibility. The LabVIEW wrapper is basically using futures (not like Java futures, which are more like an object callback, but more like Python __future__, i.e. features that are expected to be introduced as a stable function or interface in a next version, or to be discontinued). As to getting pre-built binaries from a distribution like Ubuntu for such draft builds, that is very unlikely unless someone at Ubuntu decided that it is a totally and completely unmissable feature for their platform, which I doubt the zmq draft items could ever amount to.
  23. What is your question? 😀 Yes, we do have customers wanting their test software written in Python, and not just since this year. It is not my preferred development platform, despite my having written LabPython about 20 years ago. But I just recently did support for a project that does image analysis in Python; my task was to take the image analysis from a Matlab script and implement it in C as a shared library to call from Python through ctypes, since the routines ported to Python were too slow for the desired test throughput.
  24. How long a license is valid is actually part of the activation process. In the according license response there is either "permanent", an explicit expiry date, or "1-jan-1900" for an unactivated license. I believe that in intention most volume licenses were, for a long time already, meant to be a lease with an explicit expiry date, but the license servers from NI often issued a permanent license anyhow. Whether that was an oversight, a technical problem or some forced workaround because of compatibility problems I do not know. It was obviously not very clearly communicated in many cases. Not so for Alliance Member leases; there it was pretty clear that it is a limited term license and would not let you run the software anymore once you stop paying.

      So in principle you would have to find the original paperwork, see what it stated back then and then analyse everything. It could be that you did indeed receive a perpetual license with an academic volume license in the past. But if it doesn't say so, you will have a tough case. The fact that you can still use your old LabVIEW versions without having to reactivate them is not proof that you actually have a perpetual license. It could also be due to a failure or omission when configuring the license servers in the past. If you can prove that it really was a perpetual license that your institution ordered and paid for back then, because it says so in the invoice or quotes you received, you might have a chance to appeal the silent change to a lease. It would be a change of contract and NI would have had to clearly inform you about it and offer your institute the option to terminate the agreement. But without such proof, your chances of changing anything now are pretty much non-existent.