Rolf Kalbermatter

Everything posted by Rolf Kalbermatter

  1. That's the status return value of the viRead() function and is meant as a warning: "The number of bytes transferred is equal to the requested input count. More data might be available." And as you can see, viRead() is called for the session COM12 with a request for 0 bytes, so something is not quite set up right, since a read of 0 bytes is pretty much a no-operation.
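     To illustrate where that warning comes from, here is a minimal C sketch at the VISA API level; the resource name and buffer size are just placeholders for whatever your session actually uses:

     #include <visa.h>
     #include <stdio.h>

     int main(void)
     {
         ViSession rm, instr;
         ViUInt32 retCount = 0;
         unsigned char buffer[256];
         ViStatus status;

         viOpenDefaultRM(&rm);
         /* "ASRL12::INSTR" stands in for the COM12 serial session */
         viOpen(rm, (ViRsrc)"ASRL12::INSTR", VI_NULL, VI_NULL, &instr);

         /* Request a non-zero number of bytes; a request for 0 bytes is a no-op */
         status = viRead(instr, buffer, sizeof(buffer), &retCount);
         if (status == VI_SUCCESS_MAX_CNT_READ)
             printf("Read %u bytes, more data might be available\n", (unsigned)retCount);

         viClose(instr);
         viClose(rm);
         return 0;
     }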
  2. Something about the __int64 sounds very wrong! In fact the definition of the structure should really look like this, with the #pragma pack() statements replaced by the correct LabVIEW header files:

     #include "extcode.h"

     // Some stuff

     #include "lv_prolog.h"
     typedef struct
     {
         int32 dimSize;
         double elt[1];
     } TD2;
     typedef TD2 **TD2Hdl;

     typedef struct
     {
         TD2Hdl elt[1];
     } TD1;
     #include "lv_epilog.h"

     // Remaining code

     This is because on 32-bit LabVIEW for Windows structures are packed, but on 64-bit LabVIEW for Windows they are not. The "lv_prolog.h" file sets the correct packing instruction depending on the platform as defined in "platdefines.h", which is included inside "extcode.h". The __int64 only seems to solve the problem, but by accident: it works by virtue of LabVIEW only using the lower 32 bits of that number anyway, and the fact that x86 CPUs are little endian, so the lower 32 bits of the int64 happen to be in the same location as the full 32-bit value LabVIEW really expects. But it will go wrong catastrophically if you ever try to compile this code for 32-bit LabVIEW. And if you call any of the LabVIEW manager functions defined in "extcode.h", such as NumericArrayResize(), you will also need to link your project with labview.lib (or labviewv.lib for the 32-bit case) inside the cintools directory. As long as you only use datatypes and macros from "extcode.h", this doesn't apply though.
  3. #pragma pack(push,1)
     typedef struct
     {
         int dimSize;
         double elt[1];
     } TD2;
     typedef TD2 **TD2Hdl;

     typedef struct
     {
         TD2Hdl elt1;
     } TD1;
     #pragma pack(pop)

     extern "C" __declspec(dllexport) MgErr pointertest(TD1 *arg1);

     MgErr pointertest(TD1 *arg1)
     {
         if (!arg1->elt1 || (*arg1->elt1)->dimSize < 2)
             return mgArgErr;
         (*arg1->elt1)->elt[0] = 3.1;
         (*arg1->elt1)->elt[1] = 4.2;
         return noErr;
     }

     Defensive programming would use at least this extra code. Note the extra test that the handle is not NULL before testing the dimSize, since the array handle itself can legitimately be NULL if you happen to assign an empty array to it on the diagram. Alternatively you should really make sure to properly resize the array with the LabVIEW manager functions before attempting to write into it, just as ned mentioned:

     MgErr pointertest(TD1 *arg1)
     {
         MgErr err = NumericArrayResize(fD, 1, (UHandle*)&arg1->elt1, 2);
         if (err == noErr)
         {
             (*arg1->elt1)->elt[0] = 3.1;
             (*arg1->elt1)->elt[1] = 4.2;
         }
         return err;
     }
  4. I'm afraid your conclusion is very true, especially if you only plan to build this one system. It would probably be a different situation if you had to build a few dozen, but that is not how this usually works.
  5. The IMAQ datatype is a special thing. It is in fact a refnum that "refers" to a memory location that holds the entire image information, and that is not just the pixel data itself but also additional information such as ROI, scaling, calibration, etc. Just as passing a file refnum to a file function does not pass a copy of the file to the function to operate on, passing an IMAQ refnum does not create a copy of the data. It at most creates a copy of the refnum (and increments an internal refcount in the actual image data structure). The IMAQ control does the same: it increases the refcount so the image stays in memory, and decreases the refcount for the previous image when another IMAQ refnum is written into the control.

     And there is a good reason that NI decided to use a refnum type for images. If it operated on them by value just as with other wire data, you would be pretty hard pressed to process even moderately sized images on a normal computer. And it would get terribly slow too, if at every wire branching LabVIEW would start to create a new by-value image and copy all the potentially 100MB and more data from the original image into that copy.

     And if you wire a true constant to the destroy all? input on the IMAQ Destroy function, this simply tells IMAQ to actually destroy any and every image that is currently allocated by IMAQ. If you do that you can in fact save yourself the trouble of calling this function in a loop multiple times to destroy each IMAQ refnum individually. But yes, it will basically destroy any and every IMAQ refnum currently in memory, so there is no surprise that your IMAQ control suddenly turns blank as the image it displays is yanked out of memory under its feet.

     And why would they have added this option to IMAQ Destroy? Well, it's pretty usual to create temporary images during image analysis functions and give them a specific name. If they don't exist they will be created, and once they are in memory they will be looked up by their name and reused. So you don't typically want to destroy them after every image analysis round but just let them hang around in memory to be reused in the next execution of the analysis routine. But then to properly destroy them at the end of the application, you would have to store their refnums in some queue or buffer somewhere, to refer to them just before exiting and pass them explicitly to the IMAQ Destroy function. Instead you can simply call IMAQ Destroy with that boolean set to true, to destroy any IMAQ refnums that were left lingering around.
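     To make the refcount idea a bit more tangible, here is a purely illustrative C sketch of a refcounted image wrapper; this is not NI's actual IMAQ implementation, just the general principle of what happens when refnums are copied and destroyed:

     #include <stdlib.h>

     typedef struct
     {
         int refCount;                 /* how many refnums/controls refer to this image */
         int width, height;
         unsigned char *pixels;        /* plus ROI, scaling, calibration, ... in reality */
     } Image;

     static Image *Image_Retain(Image *img)
     {
         img->refCount++;              /* roughly what copying the refnum into a control does */
         return img;
     }

     static void Image_Release(Image *img)
     {
         if (--img->refCount == 0)     /* last reference gone: only now is the pixel data freed */
         {
             free(img->pixels);
             free(img);
         }
     }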
  6. There is a reason the NI interfaces are so expensive. You need to be a member of the Profibus International group to receive all the necessary information and be allowed to sell products which claim to be Profibus compatible, and that costs a yearly fee. While the hardware is indeed based on an RS-485 physical layer, there are specific provisions in the master hardware that must guarantee certain things like proper failure handling and correct protocol timing.

     There have been two Open Source projects that tried to implement a Profibus master. One is the pbmaster project, which seems to have completely disappeared from the net and was a Linux based driver library to run with cheap RS-232 to RS-485 converter interfaces or specific serial controller interface chips. I suppose with enough effort there is a chance that one might be able to get this to work on an NI Linux based cRIO, but it won't be trivial. The main part of this project was a kernel device driver with a hardware specific component that directly interfaced with the serial port chip. To get this to talk to a normal RS-485 interface on the cRIO (either as a C module or through the built-in RS-485 interface that some higher end cRIOs have) would require some tinkering with the C sources for sure. The other project is ProfiM on SourceForge, which seems to have been more or less abandoned since 2004 with the exception of an update in 2009 which added a win2k/xp device driver. This project is however very Windows specific and there is no chance to adapt it to a cRIO without more or less a complete rewrite of the software.

     Unfortunately this is about as far as it seems to go for cheap Profibus support. While the binary protocol for Profibus is actually documented and you can download the specs for it, or study the source code of these two projects to get an idea, the Profibus protocol timing is critical enough that it will be difficult to get right with a purely user space based implementation such as using VISA to interface to a standard interface. Certain aspects of the protocol almost certainly need to be implemented in kernel space to work reliably enough. Another alternative would be to implement the Profibus protocol on the FPGA in the cRIO, but that is also a major development effort.
  7. LabVIEW creates a fixed set of GDI objects on start and then more as needed when it draws something on the screen, and also offscreen when you work with the Picture control or print something. In my work with LabVIEW I haven't really seen LabVIEW itself leaking GDI objects for quite a few years. However if you interface to external components such as ActiveX, .Net or DLL functions, that of course does not mean anything: they can create and fail to deallocate GDI objects as much as they like. DETT can only look into LabVIEW's own resources, not into resources allocated by those external components. The way to go after this is to get an idea of the rate of GDI object increase and try to relate that to certain operations in your application. Then start to selectively disable code parts until the object count doesn't increase steadily anymore. From there, divide and conquer by disabling smaller and smaller parts of the code until you get a pretty good idea about the location.
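     If you want to watch that object count programmatically rather than through the Task Manager, the Win32 GetGuiResources() call returns it directly; a minimal sketch (the one second polling interval and iteration count are arbitrary choices):

     #include <windows.h>
     #include <stdio.h>

     int main(void)
     {
         int i;
         for (i = 0; i < 60; i++)
         {
             /* GR_GDIOBJECTS / GR_USEROBJECTS count the handles owned by this process */
             DWORD gdi = GetGuiResources(GetCurrentProcess(), GR_GDIOBJECTS);
             DWORD usr = GetGuiResources(GetCurrentProcess(), GR_USEROBJECTS);
             printf("GDI objects: %lu, USER objects: %lu\n",
                    (unsigned long)gdi, (unsigned long)usr);
             Sleep(1000);
         }
         return 0;
     }

     Called through a Call Library Node from inside your application, GetCurrentProcess() refers to the LabVIEW process itself, so you can log the count alongside the operations your application performs.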
  8. You should be more specific. Various people have attached code to their postings. And the initial library from siva, while the links on lavag.org in his earlier mails got trashed by the two lava crashes that the site had in its 15 or so years of operation, has been posted to github as he wrote in this post. You just need to advance to the second page of this thread and read it in its entirety.
  9. That sounds like a pretty lame excuse. The FPGA has very little to do with the fact that DAQmx wouldn't be portable to cRIO, and in fact it is available on various cRIO systems nowadays. Please note that the LabVIEW version that KB article refers to is 7.1 and it shows a DAQmx 9.8 dialog, while the current DAQmx version is 16.0. The problem is that the USB DAQ devices are not supported by DAQmx on cRIO systems. The reasons for that are probably manifold, but the fact that every type of subdriver in DAQmx is a considerable effort, and that cRIO systems already have alternative DAQ options, most likely plays an important role.

     Trying to get this working yourself by communicating on the USB Raw level is an exercise in futility. First you would need the actual USB protocol description for the USB-6366. With the exception of a few very simple low cost devices, NI has never published protocol specs for those devices. There was a tutorial-like article in the past that explained the creation of a USB Raw interface driver in LabVIEW with one of the USB-900x devices, but I can't find that right now. However those are low speed and very simple devices that likely do not use any features like USB interrupt pipes or any modes other than bulk transfers. With the USB-6366 this is very likely different, as you can't support continuous and reliable multi-megasample per second transfers through a simple bulk transfer pipe. You typically have to use (multiple) isochronous endpoints for that and likely some interrupt pipe endpoints too, for the signaling and protocol handshake.

     This document points out that you would not need to do the inf driver wizard magic for non-Windows targets. On the Mac it just works if the device is not claimed by a driver already, and on Linux you have to make sure that it gets mounted as a usbfs device. This should also apply to the Linux RT cRIO targets. If the cRIO is however one of the older VxWorks or even Pharlap ETS based devices, you can forget about it immediately: they don't support USB Raw communication at all! The only way to get a custom USB device working on them is to actually write a custom USB kernel driver for those systems, which requires the corresponding development system for Pharlap ETS or VxWorks, which is a major investment on its own, not even accounting for the trouble of getting acquainted with kernel device driver development on those highly specialized OSes.

     But inf driver wizard or not, the real work only starts after that. You have to use VISA functions to write the correctly formatted data packets to the different communication endpoints in the device and receive the answers from it. This is very tedious low level work for any non-trivial USB device, even if you happen to have a complete bit for bit protocol description for it. It is a sure way to insanity to try to do it without such a protocol description. These protocols are usually not just some text commands that you send to the device like with traditional GPIB or RS-232 devices. The exception to that are USBTMC devices, which implement a higher level service that allows sending SCPI and IEEE-488.2 compatible string commands. But for USBTMC devices you don't need to do anything special in terms of VISA communication: you simply address them with the USB::INSTR resource name instead of USB::RAW, and then communicate with them like any other SCPI/IEEE-488.2 style device.
But there is no reason for NI to implement USBTMC for their DAQ devices and consequently they haven't done so. This saves a very complex command interpreter in the device and therefore makes it possible to use a smaller and cheaper embedded processor in the device. Also by using a fully binary protocol, the USB bandwidth is better utilized.
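     Just to give an idea of what the VISA side of such a USB Raw attempt would look like in C (the product ID, serial number and endpoint address below are placeholders, not real USB-6366 values):

     #include <visa.h>

     int main(void)
     {
         ViSession rm, dev;
         ViUInt32 retCount = 0;
         unsigned char packet[64] = {0};   /* whatever the undocumented protocol expects */

         viOpenDefaultRM(&rm);
         /* 0x3923 is NI's USB vendor ID; the product ID and serial number are made up here */
         viOpen(rm, (ViRsrc)"USB0::0x3923::0x1234::01234567::RAW", VI_NULL, VI_NULL, &dev);

         /* select which bulk out endpoint viWrite() should use, then send one packet */
         viSetAttribute(dev, VI_ATTR_USB_BULK_OUT_PIPE, 0x02);
         viWrite(dev, packet, sizeof(packet), &retCount);

         viClose(dev);
         viClose(rm);
         return 0;
     }

     Even this much only gets bytes onto the wire; without the protocol description you have no idea what those bytes should contain or which endpoints the device actually uses.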
  10. That still won't work as intended by the OP. As long as the receiver socket has free buffer space it will accept and acknowledge packets, so the sender socket never times out on a write! This is not UDP, where a message datagram is considered a unique object that will be delivered to the receiver as a single unit, even if the receiver requests a larger buffer and even if there are in fact more message datagrams in the socket buffer that would fit into the requested buffer. TCP/IP is a stream protocol. No matter how many small data packets you send (not talking about Nagle for a moment), as long as the receiver socket has buffer space available, it will copy them into that buffer, appending to any data already waiting there, and the receiver can then read it all in one single go, or in any sized parts it desires.

     So if the receiver has a 4k buffer, it will cache about 53 packets of 76 bytes each from the sender before refusing any more packets from the sender socket. Only then will the write start to time out on the sender side, after having filled its own outgoing socket buffer too. And then you need to read those 53 packets at the client before you get the first reasonably recent packet. That doesn't sound like a very reliable throttling mechanism to me at all! Of course you could make the sender close the connection once it sees a TCP Write timeout error, which will eventually give a connection aborted by peer error on the receiver side, but assuming the 4k receive buffer example above and a 100ms interval for sending packets, it will take more than 5s for the sender to see that the receiver is not reading the messages anymore and to be able to abort. If the receiver starts to read more data in that time, it will still see old data and have to read it all until the TCP Read function times out, to be sure to have the latest value.

     And that assumes a 4k buffer. Typical socket implementations nowadays use 64k buffers and more. Modern Windows versions actually use an adaptive buffer size, meaning they will increase the buffer beyond the configured default value as needed for fast data transfer. This likely doesn't come into play here, as sending 76 byte chunks of data every few ms is not considered fast data at all, but it shows you that the receive buffer size for a socket is on many modern systems more of a recommendation than a clear limit.
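     The "more of a recommendation than a clear limit" part is easy to verify at the socket API level; a minimal POSIX sketch (the 4k request mirrors the example above):

     #include <stdio.h>
     #include <sys/socket.h>
     #include <netinet/in.h>

     int main(void)
     {
         int sock = socket(AF_INET, SOCK_STREAM, 0);
         int requested = 4096;        /* ask for a 4k receive buffer */
         int actual = 0;
         socklen_t len = sizeof(actual);

         setsockopt(sock, SOL_SOCKET, SO_RCVBUF, &requested, sizeof(requested));
         getsockopt(sock, SOL_SOCKET, SO_RCVBUF, &actual, &len);

         /* Linux for instance doubles the requested value for bookkeeping overhead
            and enforces a system wide minimum, so 'actual' is rarely what you asked for */
         printf("requested %d bytes, got %d bytes\n", requested, actual);
         return 0;
     }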
  11. The quick answer is: it depends! And any more elaborate answer boils down to the same conclusion! Basically the single biggest advantage of a 64-bit executable is if your program uses lots of memory. With modern computers having more than 4GB of memory, it is unlikely that your application is thrashing the swap file substantially even if you get towards the 2GB memory limit for 32-bit applications. So I would not expect any noticeable performance improvement either. But it may allow you to process larger images that are impossible to work with in 32-bit.

     Other than that there are very few substantial differences. Definitely in terms of performance you should not expect a significant change at all. Some CPU instructions are quicker in 64-bit mode since it can process 64 bits in one single go, while in 32-bit mode this would require 2 CPU cycles. But that advantage is usually made insignificant by the fact that all addresses are also 64 bits wide, so a single address load instruction moves double the amount of data and the caches are filled twice as fast. This of course might not apply to specially optimized 64-bit code sections for a particular algorithm, but your typical LabVIEW application does not consist of specially crafted algorithms to make optimal use of 64-bit mode; instead it is a huge collection of pretty standard routines that simply do their thing and will basically operate exactly the same in both 32-bit and 64-bit mode.

     If your application is sluggish, this is likely because of either hardware that is simply not able to perform the required operations within the time you would wish, or, maybe more likely, some programming errors: un-throttled loops, extensive and unnecessary disk IO, frequent rebuilding of indices or selection lists, building of large arrays by appending a new element every time, or synchronization issues. So far just about every application I have looked at because of performance troubles did one or more of the aforementioned things, with maybe one single exception where it simply was meant to process really huge amounts of data like images. Trying to solve such problems by throwing better hardware at them is a non-optimal solution, but changing to 64-bit to solve them is a completely wasteful exercise.
  12. You're definitely trying to abuse a feature of the TCP communication here in order to fit square pegs into round holes. Your requirements make little sense. 1) You don't care about losing data from the sender (not sending it is also losing it), but you insist on using a reliable transport protocol (TCP/IP). 2) The client should control what the server does, but it does not do so by explicitly telling the server; instead you rely on the buffer full condition at the client side to propagate back to the server, hoping that that will work.

     For 1), the use of UDP would definitely be useful. For 2), the buffering in TCP/IP is neither meant for nor reliable for this purpose. The buffering in TCP/IP is designed to never allow the possibility that data gets lost on the way without generating an error on at least one side of the connection. Its design is in fact pretty much orthogonal to your requirement to use it as a throttling mechanism. While you could set the buffer size to sort of make it behave the way you want, by only allowing a buffer for one message on both the client and server side, this is a pretty bad idea in general. First, you still would have to send at least two buffers, with one being stored in the client socket driver and the other in the server socket driver. Only allocating half the message as buffer size, to have only one full message stored, would likely not work at all and generally generate errors all the time. But it gets worse: any particular socket implementation is not required to honor your request exactly. What it is required to do is to guarantee that a message up to the buffer size cannot get corrupted or spuriously lost due to some buffer overflow, but it is absolutely free to reserve a bigger buffer than you specify, for performance reasons for instance, or by always reserving a buffer whose size is a power of 2 bytes. Also it requires your client to know in advance what the message length is, limits your protocol to only work in the intended way when every transmission is exactly this size, and believe me, at some time in the future you will go and change that message length on the server side and forget to make the corresponding correction on the client side.

     Sit down and think about your intended implementation. It may seem that it would involve more work to implement an explicit client to server message that tells the server to start sending periodic updates or stop them (a single command with the interval as parameter would already be enough; an interval of -1 could then mean to stop sending data, as sketched below), but this is a much more reliable and future-proof implementation than what you describe. Jumping through hoops in order to fit square pegs into round holes is never a solution.
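     As a sketch of what such an explicit command could look like on the wire (a hypothetical format, not an existing protocol):

     #include <stdint.h>

     /* Hypothetical client -> server control message, sent over the same TCP connection.
        On the LabVIEW side the fields would be flattened explicitly to big endian rather
        than memory copied, to avoid padding and byte order surprises. */
     typedef struct
     {
         uint32_t command;       /* 1 = configure periodic updates                     */
         int32_t  intervalMs;    /* requested update interval in ms, -1 = stop sending */
     } UpdateCommand;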
  13. The multiple icons in a single icon resource are only meant for different resolutions, but really all represent the same icon. If the Windows Explorer needs to display an icon it retrieves the icon resource and looks for the required resolution (e.g. 32 x 32 pixels, or 16 x 16 for a small icon), and if it can't find it, it retrieves the one closest to that resolution and rescales it, which often looks suboptimal. In order to have multiple icons in an executable you have to add multiple icon resources to the executable, each with its own resource identifier (the number you have to put behind the comma in the registry). The application builder does not provide a means to do that, but there are many resource editors out there, both as part of development systems such as Visual Studio or LabWindows/CVI and as standalone versions. If you look for standalone versions, beware however: many download sites for such tools nowadays are less than honest and either pack lots of adware into the download or outright badware that you definitely do not want to have on your computer.
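     The Win32 ExtractIconEx() call uses the same convention as that registry value (a positive number is treated as a zero based icon index, a negative one addresses a resource identifier directly), so you can use it to check what a given number will resolve to; the path and index here are placeholders:

     #include <windows.h>
     #include <shellapi.h>
     #include <stdio.h>

     int main(void)
     {
         HICON large = NULL, small = NULL;

         /* index 0 is the first icon resource in the file, 1 the second, and so on */
         UINT count = ExtractIconExA("C:\\MyApp\\MyApp.exe", 1, &large, &small, 1);
         printf("extracted %u icon(s) for index 1\n", count);

         if (large) DestroyIcon(large);
         if (small) DestroyIcon(small);
         return 0;
     }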
  14. It's simple: how would you want to implement a multi-selection case structure using strings that should select between "a".."f" and "f".."z"? One of the two ends has to be non-inclusive if you want to allow "flying" to match a selection too. It would be impractical to let the string selection only work if the incoming string matches exactly (e.g. "f1" would then not match anything in the above sample!).
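     In textual code you would express those case ranges as half open intervals, which is roughly what the case structure does for strings; a small C illustration:

     #include <stdio.h>
     #include <string.h>

     /* Match s against ["a","f") and ["f","z"), analogous to the case
        structure ranges "a".."f" and "f".."z" with an exclusive upper end */
     static const char *classify(const char *s)
     {
         if (strcmp(s, "a") >= 0 && strcmp(s, "f") < 0)
             return "first case";
         if (strcmp(s, "f") >= 0 && strcmp(s, "z") < 0)
             return "second case";
         return "default";
     }

     int main(void)
     {
         printf("flying -> %s\n", classify("flying"));   /* second case */
         printf("f1     -> %s\n", classify("f1"));       /* second case, not default */
         return 0;
     }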
  15. Well, as already mentioned, it is hard to say anything specific here from just watching that spastic movie. I haven't seen spontaneous execution highlighting myself, but your mentioning that shutting down the application can take very long and usually crashes would support the possibility that you have Call Library Nodes in your application that are not correctly configured, and when they get called they consistently trash your memory in a certain way. Buffer overflows are the most common problem here, where you do not provide (large enough) buffers to the Call Library Node parameters for the shared library that wants to write information into them. This will result in corrupted memory, and the possible outcome can be anything from an immediate crash to a delayed crash at a later, seemingly unrelated point in time, including when you shut down LabVIEW and it stumbles over trashed pointers and data objects while trying to clean up the memory. It could also sneakily overwrite memory that is used in calculations in your application and in that way produce slightly to wildly different results than what you expect, or, as in this case, write over the memory that controls the execution highlighting. So check your application for VIs containing Call Library Nodes (and while the NI drivers do use quite a lot of Call Library Nodes, you should disregard them in a first scan; they are generally very well debugged and tried many millions of times, so it is unlikely that something is wrong in that part unless you somehow got a corrupted installation). Then, when you have located the parts in your application that might be the culprit, start disabling sections of your code using the conditional disable structure until you don't see any strange behavior anymore, including no crash or similar thing during the exit of LabVIEW. A typical example of such a misconfiguration is sketched below.
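     A typical example of the kind of misconfiguration meant here, with a made up DLL function:

     #include <string.h>

     /* Hypothetical DLL export as it might appear in a vendor header */
     __declspec(dllexport) void GetDeviceName(char *name, int maxLength)
     {
         /* the function trusts the caller that at least maxLength bytes really exist;
            strncpy also pads the remainder of those maxLength bytes with zeros */
         strncpy(name, "Some Instrument 1234", maxLength);
         name[maxLength - 1] = '\0';
     }

     /* If the Call Library Node passes maxLength = 256 but the string wired into the
        'name' parameter is empty (nothing pre-allocated on the diagram, no Minimum Size
        set in the CLN configuration), the function writes past the end of LabVIEW's
        buffer and corrupts adjacent memory. The crash, if any, usually happens much later. */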
  16. This is basically asking the wrong question in the wrong way. The LabVIEW diagram is always drawn as a vector graphic, but the icons are bitmaps. But yes, the coordinates of the diagram are pixels, not some arbitrary high resolution unit like mixels (micro-meter resolution or whatever). Changing that in current LabVIEW would be a major investment that is not going to happen.
  17. We recently came across this problem, not with the Report Generation Toolkit itself but in our own library to interface to Excel. Microsoft seems to have changed the interface to the Save and SaveAs methods once again in Office 2016. LabVIEW, as a statically compiled environment, implements the interface to ActiveX as a static dispatch interface at runtime; only at compile time does it reevaluate the method interface against the type library actually installed on the current computer. This is a choice that works fine in most cases and is pretty fast performance wise, but it fails if someone changes the ActiveX interface of a component and you try to call that component from LabVIEW without wanting to bother about the actual version installed on the final target system. Microsoft however does not provide compatibility methods that support the old interface, since they feel that the ActiveX dynamic dispatch capability, where a caller can find out about and construct the necessary dispatch interface at runtime, makes that unnecessary. Unfortunately ActiveX is considered by both Microsoft and NI as a legacy technology, so neither party has much interest to invest any time at all into this beyond keeping it working as it does now.

     Basically there is no easy solution for this. You have to compile the app on a computer that uses the same Office version as what the target computer will use. The only way around that is to actually create separate wrappers for the Save methods on two different computers, each with its own version of MS Office installed, and then in your app determine the actual Office version that is installed and invoke the correct VI dynamically. A possible but painful workaround. For older Office versions I believe NI already incorporated such a fix into the Report Generation Toolkit for the Save method (and used a somewhat sneaky trick to avoid accidental recompilation of the relevant dynamic VIs during an application build, which would adapt them to whatever Office version is currently installed on the machine), but obviously this hasn't been updated for Office 2016 yet. It's a maintenance nightmare for them for sure, but the alternative of implementing runtime dynamic dispatch for ActiveX methods would be a major investment with several possible problems for existing applications in terms of performance, and that is very unlikely to happen, since ActiveX has already been considered a legacy technology for about a decade.
  18. Except that of the languages I do know, only Java, C# and C++14 support a standard deprecated attribute. Yes you can do it in gcc with __attribute__((deprecated)) and in MSVC with the __declspec(deprecated) keyword too, but that is a compiler toolchain specific extension which is not portable. So for many languages it either ends up as a custom decorator from a specific library (Python or Lua can do that) or it's just a comment that nobody will read anyhow!
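     For reference, the two compiler specific spellings wrapped into a single portable macro could look like this in C (OldFunction is just a made up example):

     #if defined(__GNUC__)
     #define DEPRECATED(msg) __attribute__((deprecated(msg)))
     #elif defined(_MSC_VER)
     #define DEPRECATED(msg) __declspec(deprecated(msg))
     #else
     #define DEPRECATED(msg)
     #endif

     /* any caller compiling against this header gets a compile time warning */
     DEPRECATED("use NewFunction() instead")
     int OldFunction(int x);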
  19. It would actually help if you saved the VIs for a previous version. I haven't installed LabVIEW 2016 here. However as far as calling the function in the header file is concerned, something like this should definitely work, if you didn't mess up the configuration of the DLL build script (which I can't check for lack of LabVIEW 2016):

     #include <stdio.h>
     #include "Password.h"
     #define BUF_LENGTH 100

     int main(void)
     {
         char input[] = "some text";
         char output[BUF_LENGTH];
         MakePassword(input, output, BUF_LENGTH);
         printf("This is the converted text: %s\n", output);
         return 0;
     }

     I'm not sure about your C programming experience, but if you have little to none, the trick is most likely in providing a valid buffer for the output string and not expecting the function to handle that automatically like you are used to in LabVIEW!
  20. Reading a bit further on the technical mailing list, it seems there was an initial clash of sorts between a few people who were on the two opposite ends of wanting to get code into the kernel and wanting to maintain a clean kernel source code base. Both points are pretty understandable, and both sides sort of resorted to some name calling in the initial phase. Then they sat down together and actually started working through it in a pretty constructive manner. None of the latter seems to have been picked up by the mainstream slashdotted media, and the quick and often snarky comments of more or, very often, less knowledgeable people concentrated mostly on that initial fallout. It's very understandable that the kernel maintainer didn't want to commit 100k lines of code into the kernel just like that. Apparently the AMD guys didn't expect that to happen anyway and were really proposing the code as a first RFC style submission, after having worked a bit too long in the shadows on the huge code base. They didn't however make this clear enough when submitting the code, and the maintainer was a bit quick and short in his answer. In hindsight the way this was handled from both sides isn't necessarily optimal, but there is no way this code would have been committed if Linus himself still controlled the kernel sources.

     Yes, developing for the Linux kernel is a very painful process if you are used to other device driver development such as on Windows. There you develop against a rigidly defined (albeit also frequently changing) kernel device interface. In Windows 3.1 and Windows 95 days you absolutely had to write assembly code to be able to write a device driver (VxD); in Windows 98 and 2000 they introduced the WDM model, which replaced both the VxD and NT driver models completely. With Windows Vista the WDF framework was introduced, which is supposed to take away many of the shortcomings that the WDM model had for the ever increasing complexity of interactions between hardware drivers, such as power saving operations, suspend, IO cancelation or pure user mode drivers.

     In the Linux kernel, a driver is normally an inherent part of the kernel sources. As such drivers are very tightly coupled to the specific kernel interfaces, which are not static at all but change as needed, and all the drivers in the kernel source then have to be modified to adhere to these changes. It's obviously a very different development model than what you see in closed source OSes, but there is something to be said for not involving several layers of complex intermediate abstraction that are generally difficult to debug and even more difficult to keep in sync with modifications on both the upper and lower boundary of each layer. It's not a very good model in terms of scaling when adding many different device drivers, as a single change in the kernel interface can easily require changing every single device driver in the source tree too, but that's what the kernel developers decided on. The alternative is to not only strictly specify the interface between the kernel and user space, which the Linux kernel does (although often with a twist, in that they seem to prefer to do things differently from BSD and other Unix variants, seemingly for the sake of being different), but also to define a stringent and static device driver API inside the kernel that will never change except maybe between major kernel versions.
And even though that might seem like a good idea for device driver developers, as it would allow closed source drivers that don't need to be recompiled with every kernel upgrade, it's also a model that requires enormous architectural work upfront, before any driver can be written, only to find out by the time the interface has been defined and the necessary infrastructure has been developed that it is already obsolete. Another factor that might play in here is that the only way a device driver is actually easy to maintain in such a development model is by open sourcing it, which is of course one of the main motivations of GNU in general and the Linux kernel especially. Unfortunately this leaves users of hardware that the manufacturer doesn't want to document openly, such as most NI hardware, pretty much in the cold when they want to use Linux.

One can get angry at NI or the Linux kernel guys for each maintaining their position, but that doesn't help, and in the end both sides have the right to deal with this as they wish and currently do. Linux is not going to have a static device driver interface, and while that is a pain for anyone not wanting to donate their device driver source, with lots of their own support and sweat, into the kernel mainline, it's how the Linux world is going to work for as long as there are people who want to work on Linux. It seems that many open source developers favor this model also outside the rather confined albeit extensive kernel development, but they forget that if you write a library you are not operating in a closed environment such as the kernel sources, but in a world where others will actually want to interface to that library, and then arbitrarily changing the contract of a library API, simply because it is convenient to do so, should not be something that is considered without the utmost care.
  21. First check that your hosting provider allows external connections to the database server at all. Almost every webspace provider nowadays lets you install mySQL (usually as MariaDB now) in your webspace environment so you can implement webstores, blogs and whatever else on your hosted website. However most do not allow connections to that database from outside the virtual website environment, for security reasons. Once you have determined that such external connections are allowed, you have to determine which type of database server is used. Besides mySQL (MariaDB), you can also get hosted database servers based on MS-SQL or possibly even Oracle for some high throughput commercial services, and that will largely influence the possible selection of your interfacing strategy. The SQL Toolkit you so emphatically excluded would support almost all possible servers. Alternatives are LabSQL, which is based on the same ADO interface that the SQL Toolkit uses, or the ADO-Tool. Depending on the server used you might also get lucky with the mySQL native driver from Saphir.
  22. There are certainly problems with storing and retrieving fractional seconds from database timestamps, and they depend on the database, the corresponding database driver and such. We had a lot of trouble with that on MS-SQL and Oracle in the past, and the only thing that works reliably across various versions of databases is to use stored procedures that take either a custom number format or the fractional and second parts as separate numbers, and then use DB specific functions to combine the two values into a native timestamp. Both ODBC and ADO/DAO lack a unified standard for this that all database drivers would support, and traditionally the timestamp only supported full second resolution in ODBC and accordingly ADO, as well as in most database servers. You can't really blame the Database Toolkit for this, since it is really a pretty thin wrapper around ADO and can't make up for historical shortcomings of the underlying infrastructure.

     As to implementing a native T-SQL protocol through TCP/IP VIs, there is at least one library out there that is definitely not as extensive and well tested as the Saphir toolkit for MySQL, but workable. The problem with the T-SQL protocol, or more precisely TDS, is that it is not fully documented. The Open Source implementation in C, called FreeTDS, is based in part on an older public specification of an older Sybase SQL Server version which MS-SQL Server is derived from. That documentation is for version 4.2 of the TDS protocol, but current MS-SQL Server versions use version 7.4. MS has added various extensions to it since the 4.2 version, and current MS SQL Servers refuse to connect with a TDS client that doesn't support at least 7.0. Quite a bit of the 5.0 and higher support in FreeTDS was basically reverse engineered through network logs and as such can be considered working for many cases, but likely isn't fully protocol compliant. While one could implement a native LabVIEW library for the TDS protocol using the TCP/IP VIs, this would have to be based in large part on the openly available protocol documentation of the TDS 4.2 specification, with extra info from the preliminary protocol description in the FreeTDS documentation, possibly helped by peeks into the FreeTDS source code. But that source code is under the GPL license, so looking too much at that code is not a good idea if you want to produce a non-GPL implementation of said protocol. An additional problem with trying to implement this in pure LabVIEW is the fact that newer protocol versions add various encryption and compression features that are not easily implemented in pure LabVIEW.
  23. I'm not sure you can blame Linux for this. A packed library is an entirely LabVIEW specific feature. It's basically a ZIP archive with an executable header, for reasons I don't know. It definitely is not instantiated through OS loader code, so the executable header seems to be mostly tacked on for version resource purposes and looks unnecessary. The entire content of a packed library is LabVIEW specific code: the precompiled executable code for the VIs and optionally the VI diagrams for debugging purposes. The loading and linking of these code resources is done entirely by LabVIEW itself. So what the problem with packed libraries on NI Linux RT systems is, I have no idea.
  24. What do you expect? The DSC system is a collection of many shared libraries that work together. Someone apparently cracked the Windows version that you are using. The realtime variants of those shared libraries have to be different because they can't rely on the same Windows API (Pharlap ETS), or are for completely different CPUs (VxWorks) or operating systems (NI Linux RT). They can't use the license manager as used for the Windows version, but I'm sure NI is smart enough to employ some kind of protection there too. You will be hard pressed to find a script kiddie who has such hardware available and is willing and able to crack this for you. Asking about this in a public forum is definitely not the smartest move you can make!