Posts posted by Rolf Kalbermatter
-
Hey everyone,
the last response here was almost a year ago, so I wonder if there was a solution for the problem? I am still working with LV2012 and am currently running into problems with memory that I can't release with the "Request Deallocation" function.
Thank you very much.
Generally, "Request Deallocation" is not the right solution for problems like this, even if it works more agressive than it did in previous versions, which I'm not aware of that it does.
You should first think about the algorithm and consider changing the data format or storage to something better suited. LabVIEW is not C! In C you not only can control memory down to the last byte allocated, you actually have to, and if you miss one location you create memory leaks or buffer overflows. In LabVIEW you generally don't control memory explicitly and can't even do so at the byte level at all. It's similar with other high-level languages, where memory allocation and deallocation is mostly handled by the compiler system rather than by the programmer.
Writing a program in C that is guaranteed to never leak memory or do buffer overwrites is a major exercise that only few programmers manage, and only after lots and lots of debugging. So pushing this into the compiler, which can be formally designed and tested to do the right things, is a major step towards better programs. It takes away some control and tends to cost memory due to things like lazy deallocation (mostly for performance reasons, but sometimes also as a defensive measure, since especially in multithreading environments there is not always a 100% guarantee that deallocating a memory area would be a safe operation at that point).
Request Deallocation basically has to stop all LabVIEW threads to make sure that no race condition can occur where a block marked as currently unused is freed while another thread attempts to reuse that block. As such it is a total performance killer, not only because it causes many extra memory allocations and deallocations but also because it has to stop just about everything in LabVIEW for the duration of its work.
-
Hello,
I am doing my senior project on controlling an extruder machine using LabVIEW. I did my design and simulation using SIMULINK, but I need to implement some transfer functions in LabVIEW for decoupling the process.
I am using LabVIEW 2011 with the Control Design and Simulation toolkit, but I get an error on a wire: "wire: is a member of cycle".
Is there any solution for this? Please help.
Duplicate of post http://lavag.org/topic/18653-control-design-and-simulation-error-wire-is-a-member-of-cycle/ - Neil was quicker and already posted the solution with Feedback Nodes there.
Yes, inserting a Feedback Node will allow you to connect wires in circles. You just need to be aware that this won't behave as an infinitesimally small delta t; instead your loop timing becomes dt, since the Feedback Node stores the value from one loop iteration and feeds it back to the input in the next iteration.
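To make that concrete, here is a rough C analogue (all values are made up): the variable carried from one iteration to the next plays the role of the Feedback Node, and dt is simply your loop period, not some continuous-time quantity.

#include <stdio.h>

int main(void)
{
    double tau = 1.0;   /* time constant in seconds (example value) */
    double dt  = 0.1;   /* loop period: this is effectively your "delta t" */
    double y   = 0.0;   /* carried over between iterations, like the Feedback Node */
    double u   = 1.0;   /* step input */

    for (int i = 0; i < 50; i++) {
        /* previous output fed back into the next iteration: first-order lag */
        y = y + dt * (u - y) / tau;
        printf("%2d  %f\n", i, y);
    }
    return 0;
}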
-
Hello,
finally I got the solution to my problem but didn't post it here, so I'm doing that now to help anyone who might run into this problem. I'm not sure if the COM object is non-thread-safe, but the fact is that if I mark the VI to run in the UI thread, everything goes smoothly...
Regards.
Well, the COM automation server should have been installed with a setting that specifies which threading model it is happy with, and LabVIEW honors that setting. The common threading models available in COM are:
- single threading: the component can only be called from the main thread of the application that loaded it. The LabVIEW UI thread coincidentally also is its main thread.
- apartment threading: the component can be called from any thread, but during the lifetime of any object it must always be the same thread.
- free threading: the component is fully thread-safe and can be called from any thread at any time in any combination.
Most automation servers require apartment threading, often not so much because it is really needed as simply because it is the default, and many developers are too lazy to verify whether their component would run with a less restrictive threading model or, more seriously, whether it might actually require a more restrictive one.
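For reference, this is roughly what the choice looks like from a plain C client (standard Win32/COM calls, nothing specific to any particular server): the component declares its own model in the registry (the ThreadingModel value under its InprocServer32 key), and the client joins a matching apartment before creating objects.

#include <windows.h>
#include <objbase.h>

int main(void)
{
    /* Join a single-threaded apartment: any apartment-threaded object created
       on this thread must from now on always be called from this same thread. */
    HRESULT hr = CoInitializeEx(NULL, COINIT_APARTMENTTHREADED);
    if (FAILED(hr))
        return 1;

    /* ... create and call COM objects here ... */

    CoUninitialize();
    return 0;
}

In LabVIEW you don't call CoInitializeEx yourself; marking the VI to run in the UI thread is effectively how you keep all calls on one and the same thread.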
-
Hello,
I've been using the Snap7 library for a while in the client role, so I can communicate with SIEMENS PLCs. There is LabVIEW-specific info here: http://snap7.sourceforge.net/labview.html
I can successfully use the Cli VIs to read/write the PLC's DBs (a DB is just a memory block). For reading I pre-allocate a buffer, but I'm not sure if I'm doing it properly. My main concern is memory leakage: I don't know whether the memory will be freed after the VI exits or whether it stays allocated as long as the VI is in memory.
Attached is what I have right now for reading. I define a typedef for each DB (a cluster of booleans) and reserve memory according to the number of booleans, but always in multiples of 8 (I only read whole bytes). I request the data with CliDBRead and afterwards do some conversions to return the data in a variant that I later cast to the proper typedef.
Can you tell me if this is the best way to do this?
Thank you!
LabVIEW is a fully managed environment. As such you do not have to worry about explicit memory deallocation unless you are calling into external code that allocates memory itself.
You're not doing anything like that here; you use LabVIEW functions to create those buffers, so LabVIEW is fully aware of them and will manage them automatically.
There is one optimization you can make, though. The Flatten to String function most likely serves no purpose at all. It's not clear what data type the constant 8 has, and if it is not a U8 it might even cause problems, as the buffer that gets created is then most likely bigger than what you need and the Snap7 function will therefore read more data from the datablock. In the best case this causes performance degradation, because more data is transferred each time than necessary, and it could possibly cause errors if the read operation ends up going beyond the datablock limit.
And if it is an unsigned 8-bit integer constant, a Byte Array to String function would do the same thing, but faster and clearer.
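Just to illustrate the buffer sizing in C terms, here is a minimal sketch using Snap7's C API (the names are from snap7.h as I remember them, so double-check against your copy): size the buffer to the number of booleans rounded up to whole bytes, read the DB into it, and free it afterwards.

#include <stdio.h>
#include <stdlib.h>
#include "snap7.h"   /* Snap7 C/C++ wrapper header */

int read_bool_db(S7Object client, int db_number, int num_bools)
{
    int size = (num_bools + 7) / 8;        /* whole bytes needed for the booleans */
    unsigned char *buf = calloc(size, 1);  /* pre-allocated read buffer */
    if (buf == NULL)
        return -1;

    int result = Cli_DBRead(client, db_number, 0, size, buf);
    if (result == 0) {
        /* example: extract bit 3 of the first byte as one boolean */
        int some_flag = (buf[0] >> 3) & 1;
        printf("flag = %d\n", some_flag);
    }
    free(buf);                              /* caller owns the buffer, so no leak */
    return result;
}

The same logic applies on a LabVIEW diagram: the byte array or string you create is the buffer, and LabVIEW deallocates it on its own once it is no longer referenced.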
-
You can add obfuscation (not real security). For example you could XOR your password with a second password.
And store that second password in the application!
Granted, it would maybe get a lazy adversary to think he already has the password and give up when it doesn't work, but it is almost no win against any determined adversary.
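For what it's worth, the whole scheme boils down to a couple of lines of C (the passwords here are obviously just placeholders):

#include <stdio.h>
#include <string.h>

static void xor_buf(char *data, size_t len, const char *key, size_t keylen)
{
    for (size_t i = 0; i < len; i++)
        data[i] ^= key[i % keylen];
}

int main(void)
{
    char secret[] = "s3cr3t!";       /* the real password (placeholder) */
    const char *key = "0bfu5c4t3";   /* the second password baked into the app */
    size_t len = strlen(secret);

    xor_buf(secret, len, key, strlen(key));   /* this obfuscated form is what you store */
    xor_buf(secret, len, key, strlen(key));   /* XORing again recovers the original */
    printf("%s\n", secret);
    return 0;
}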
-
This might be of interest to those of you who support applications on Windows 7 or who use Windows 7 VMs for development work.
http://redmondmag.com/articles/2014/10/27/windows-7-sales-to-consumers.aspx
Your title is misleading :-).
It's only the consumer versions of Windows 7 that are being discontinued. Most Windows 7 computers sold nowadays are Windows 7 Professional systems for professional use anyway. Microsoft has promised to keep selling those and to give at least one year of notice before it stops doing so.
-
Hi labviewbeginner, I hope this reply is not too late.
I think the problem is with byte alignment (or data alignment), particularly with structs in C.
See: http://digital.ni.com/public.nsf/allkb/F7E5C9169D09E98586256AF300717B33
Your C compiler is configured to use a different data alignment (1, 2, 4, 8 bytes, etc.) than your LabVIEW version is set up to use.
See: http://en.wikipedia.org/wiki/Data_structure_alignment (although Wikipedia is not that clear).
It simply means that the compiler pads your struct sizes to a multiple of the byte alignment, hence a different size is reported when you try to copy memory, and an error when the size is unexpectedly too big.
It's a quick change in your C compiler: compile with the correct data alignment, or try them all until the alignment matches what LabVIEW generates.
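A quick C illustration of what that padding does to sizeof (typical x86 defaults assumed; the struct itself is made up):

#include <stdio.h>
#include <stdint.h>

#pragma pack(push, 1)
typedef struct { uint8_t a; uint32_t b; uint16_t c; } packed_t;   /* 1-byte alignment: 7 bytes */
#pragma pack(pop)

typedef struct { uint8_t a; uint32_t b; uint16_t c; } padded_t;   /* default alignment: typically 12 bytes */

int main(void)
{
    printf("packed: %u bytes, default: %u bytes\n",
           (unsigned)sizeof(packed_t), (unsigned)sizeof(padded_t));
    return 0;
}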
Hope this helps
Believe me, alignment is the smaller of the problems he is encountering. The bigger problem is the lack of understanding of C pointers, strings and all that, and of the fact that LabVIEW strings are something very different from C strings. That, together with the proper memory allocation and deallocation rules for any pointer you use.
At some point I lost my patience and found that ned was doing a better job of trying to teach him a little about C programming than I could muster, so I left it at that.
-
I have complained before about Error 6 but am still of the opinion that it is better than having files outside of the build.
It would be nice if NI could do the build in a temp directory (as close to the root as possible) to avoid this, then copy it back to the distribution location - all transparently (I currently have to use a script to do this manually).
Technically there would be a way to solve this in LabVIEW for almost all instances. The Windows API generally has ANSI and Unicode variants of all file I/O functions. If they explicitly used the Unicode variants internally and made the LabVIEW-path-to-Windows-path conversion a tiny bit smarter, they could have solved this years ago for all OSes since Win2K.
Since paths are a distinct datatype in LabVIEW, the entire modification could have been done transparently for LabVIEW diagrams without compatibility problems. Why they didn't do it back in LabVIEW 8.0, when they modified all file functions to work with 64-bit file offsets to allow access to files >2GB, is still not clear to me. It would have been an invasive change in the LabVIEW source code only, with almost no chance of any negative influence on LabVIEW applications.
Changing the file functions to use Unicode paths internally would IMHO have been a less involved exercise than supporting 64-bit file offsets throughout. Not to say that 64-bit offsets are not important (they are), but allowing paths to go beyond 260 characters is also a pretty important feature nowadays (and would have been a great opportunity 10 years ago, when this generally wasn't yet a real issue for most computer installations).
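For reference, this is the mechanism I mean: the wide-character Windows file APIs accept the \\?\ prefix, which lifts the 260-character MAX_PATH limit. The path below is of course just a placeholder.

#include <windows.h>
#include <stdio.h>

int main(void)
{
    /* the \\?\ prefix tells the wide-char API to skip MAX_PATH validation */
    const wchar_t *long_path =
        L"\\\\?\\C:\\some\\very\\deeply\\nested\\build\\output\\file.txt";

    HANDLE h = CreateFileW(long_path, GENERIC_READ, FILE_SHARE_READ, NULL,
                           OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, NULL);
    if (h == INVALID_HANDLE_VALUE) {
        printf("open failed: %lu\n", GetLastError());
        return 1;
    }
    CloseHandle(h);
    return 0;
}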
-
RolfK, that is an interesting point of view. I agree, the only advantage is staying within LabVIEW the whole time, and every developer may decide how big an advantage that is for him. For me it is a big one, because otherwise I would have to document and maintain a complete second toolchain.
However, I think the following is not true:
"LoadLibrary() loads the DLL into the calling process", which is our process. That is also the process, into which the original third-party DLL is loaded, so there are no other processes involved. The whole concept of callbacks would not work if there were multiple processes, because one process can not call a subroutine within another process.
Unfortunately I also doubt the statement about a separate "LabVIEW runtime process", because I have never heard of such a thing and found no reference to it, neither at NI nor on the internet. As I understand it, a runtime engine is a library that is used by a process (our LabVIEW executable), not a process itself (e.g. a server process).
The interesting question remains: what happens when a process uses two different LabVIEW runtime engines?
This would obviously be the case if I compile the EXE in LV2013 and the DLL in LV2012. But it would also be the case if a C program uses two LabVIEW DLLs compiled with different versions of LabVIEW. And because the latter surely must work, I expect the former to work as well. Nonetheless I will certainly test that in the next few days and post my findings here.
When moving platforms one has to recompile no matter what method is used. After all, a C wrapper is platform dependent and, last but not least, so is the original third-party DLL.
One interesting thought to close with:
Why do we even have to create a wrapper DLL? A nice feature would be for LabVIEW to "export" certain VIs from an EXE, too. That way one could just use GetProcAddress() without loading a DLL at all. Maybe that way the callback VI could even run within the same application instance?
Well, there is a chance that it is not really an out-of-process invocation. A LabVIEW DLL consists of three things:
1) a stub loader
2) a C compiled wrapper for each function
3) the compiled LabVIEW VI (and subVIs) for each function
The stub loader is responsible for locating the right LabVIEW runtime and loading it, but it skips that step if it determines that the calling process already contains a compatible LabVIEW runtime of the same version. This can also be the runtime environment of the LabVIEW IDE. That not only speeds up the initialization of the DLL but also allows more efficient passing of function parameters, since otherwise all parameters need to be copied from the calling environment into the DLL environment, as Shaun has already quoted.
So while the LabVIEW DLL is loaded into the process, it is not entirely certain that the matching LabVIEW runtime is also loaded into that same process when the stub loader determines that the LabVIEW runtime already present cannot execute the VIs in the DLL because of version differences. While possible, that certainly poses some difficulties in terms of heap management, but an out-of-process invocation would pose its own kind of challenges too.
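For reference, the calling side being discussed here looks roughly like this in C (the DLL name and the exported "Add" function are made up for the example; the real export names come from the build specification):

#include <windows.h>
#include <stdint.h>
#include <stdio.h>

typedef int32_t (__cdecl *AddFunc)(int32_t a, int32_t b);

int main(void)
{
    /* loading the DLL runs its stub loader, which finds or loads a runtime */
    HMODULE dll = LoadLibraryA("MyLabVIEWLib.dll");
    if (dll == NULL)
        return 1;

    AddFunc add = (AddFunc)GetProcAddress(dll, "Add");
    if (add != NULL)
        printf("result = %d\n", add(2, 3));

    FreeLibrary(dll);
    return 0;
}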
-
I'm with Shaun here: warnings in their current state are ineffective due to their overwhelming nature. I can't remember the last time I paid attention to the warning list.
I regularly do, to find hidden objects in a diagram of inherited code, but I normally ignore the useless unconnected-terminal warnings. Those were maybe sort of useful in the old days, but nowadays, with the event structure and whatnot, I often end up with terminals just lying around in the corresponding event case without ever needing to be connected.
For the rest I'm with everyone else here: don't treat an analysis result like a race condition as a breaking error. It's a serious warning (unlike unconnected terminals), but it certainly shouldn't break code execution on its own.
-
Results!
I investigated option 3 (communication via TCP/IP). It's about twice as fast when using six instruments - but only after setting the super secret TCP_NODELAY option on the TCP socket with a call to wsock32.dll.
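(For reference, that wsock32.dll call boils down to a plain setsockopt() on the underlying socket - a minimal sketch, assuming you already have the raw SOCKET handle from wherever you obtain it:)

#include <winsock2.h>

/* returns 0 on success, SOCKET_ERROR otherwise */
int disable_nagle(SOCKET s)
{
    int flag = 1;   /* 1 = send small packets immediately instead of coalescing */
    return setsockopt(s, IPPROTO_TCP, TCP_NODELAY,
                      (const char *)&flag, sizeof(flag));
}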
After talking to Keysight about the problems I experienced when talking over USB, they said they had seen something similar, but only when using viBufRead calls, which is the default in VISA. You can also see those calls when running the NI I/O Trace software.
From the VISA write help:
So, ignoring the warning about the potential performance hit, I switched to synchronous I/O mode and all is well.
No more crashes! Hooray
Geez, I completely forgot about asynchronous VISA. It has been years since I had to tinker with that because of flaky device drivers.
-
Here is a previous article describing almost the same situation as mine:
http://forums.ni.com/t5/LabVIEW/Access-violation-0xC0000005-at-EIP-0x013C7314/td-p/2258546
http://www.ni.com/white-paper/13164/en/#213279_by_Category - Issue #213279, still unsolved.
I'm not sure whether any of this is related to my problem.
Someone helped me test my program with LV2013, and the crash still happened.
I guess that somehow the dynamic property nodes got stuck at the previous address of the pointer, so they loaded the wrong value, or crashed because that pointer address no longer exists in the computer or is occupied by another program (maybe the OS itself).
I just don't understand why renaming the dynamic dispatch property node could fix it, and why this problem happens again and again with the dynamic dispatch property nodes of different classes.
Are you using any Call Library Nodes in your code? If they come from NI libraries they are probably OK, but anything else is suspect. Badly configured Call Library Nodes or buggy external shared libraries have the potential to create exactly such problems.
And LVOOP has the potential to be extra susceptible to such corruptions, but unless your LabVIEW installation itself somehow got corrupted, it is by no means the cause of them.
-
I hope to keep the memory footprint down, but since the application is a test system that simultaneously tests hundreds of DUTs in parallel (each DUT getting its own instance of a test executive), the data consumption can add up.
The current system uses ~9MB per DUT plus overhead of 66MB for the whole system. I suspect the new system will exceed this a bit. So, assuming 100MB of overhead and 10MB per DUT, that puts me at 5.1GB for 500 DUTs (that is my target maximum).
So, it is possible that I could benefit from a larger memory space. Need to get the new system completed and do some testing to confirm this.
10 MB per DUT, fully multiplied by the number of DUTs! That makes me suspect you might have set all VIs to reentrant just to be on the safe side. While that is possible, and LabVIEW nowadays can handle full reentrancy, it is not a very scalable decision. Reentrancy for parallel VI hierarchies is often unavoidable, but it should be an informed decision made per VI, not a global setting.
-
Ok so "FlashErrorText()" crashes the LabView. The Readme file says:
But I don't know how to declare String with 120 characters in LabView. Should I put manually 120 characters in it and connect to library node input?
Maybe I should connect uint8_t array with 120 elements and change the input parameter from Cstr to Array?
Simply use an Initialize Array node with U8 as the datatype and a length >= 120, then convert it to a string with Byte Array to String.
Changing the Call Library Node parameter to an array of U8 would work too, but you would end up with a string that always contains 120 characters, of which only the characters up to the first NULL byte are meaningful. If you configure the Call Library Node parameter as a CStr pointer, LabVIEW takes care of scanning the string for that NULL character on return and truncating it to the correct length.
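Seen from the C side it looks roughly like this (the prototype is my paraphrase of the readme, so verify the exact name and signature there): the caller owns the 120-byte buffer and the DLL just writes a NULL-terminated message into it.

#include <stdio.h>

/* prototype paraphrased from the readme -- check the exact name/signature there */
extern void FlashErrorText(char *message_buffer);

int main(void)
{
    char msg[120] = "";        /* caller-allocated buffer, >= 120 bytes as documented */
    FlashErrorText(msg);       /* the DLL writes a NULL-terminated C string into it */
    printf("%s\n", msg);
    return 0;
}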
-
For the last 3 years our application has been maintained in LV2011. We have recently decided to migrate it to LV2014 so I am currently testing the sources (and executable) in LV2014 to make sure everything still works.
I noticed that when I run the application from source, the main HMI (which is the startup VI) displays the menu and tool bars even though the appearance settings say they shouldn't be displayed. It's not like they are really active either, since I can't click on anything; it looks like a pure display glitch. In addition, as soon as I resize the VI, they disappear and the HMI looks the way I expect.
The same thing happens with the executable, except that the area corresponding to the menu and tool bars shows the Windows desktop instead, as if you could see through the VI... Once again, things go back to normal when I resize the VI.
I've never seen that with previous versions of LabVIEW. Could it be a bug introduced in LV2013 or LV2014? Have you ever heard of somebody having a similar issue?
Thanks!
I've seen similar things occasionally, in at least LV2013 too, but nothing that I could pinpoint, and so far I think only in applications I received from others that most likely also originated in an earlier version.
-
That means there is still something wrong. Either one or more Call Library Nodes are still configured wrongly, or there is a bug somewhere in the flash DLL. The most likely culprit is a badly configured Call Library Node.
Have you made sure that every function which returns information in a string or array buffer is called with a properly allocated buffer? If your buffer is even one byte smaller than what the function tries to write into it, it will inevitably overwrite some memory and destroy something. This often results in a 1097 error if the overwriting is serious enough, but it can also go unnoticed until you try to close LabVIEW, which then stumbles over the corrupted pointers while trying to clean everything up. Or it can crash somewhere between the point where the overwriting happens and the closing of LabVIEW. And if the overwriting does not affect pointers, it may hit data that your program uses for calculations elsewhere.
-
Careful. You most certainly will not have a full 4 GB to use. In practice I've never gotten close to the limit, because dynamic memory allocations begin failing long before getting there. Chances are that if you're using that much memory, it's not with a bunch of scalars. I get nervous when I see memory footprints nearing 2 GB in LabVIEW.
From what John describes in the first post, I would be surprised if his application got even remotely close to 1GB of memory consumption. In my experience you only get above that when Vision gets involved, or with highly inefficient programming where large data arrays get graphed, analysed and whatnot in a stacked-sequence programming style.
-
OK, so you are now at the same point I am. The first two commands return 0, and disconnect returns 1 (the first command should return 1 even without hardware - I tested that with the cmd exe program prepared by the supplier).
You can test that too. Download the attached zip and open cmd. Go to the unzipped directory in cmd. Once there, type RunBatchFile test.bat and press Enter. You should see that the first command, SetupAndConnect, passes.
I forgot to say, you have to edit the first line of "test.bat" and put in your own path to conf.ocd; for example I have:
"C:\Users\stn_dak\Desktop\FlashAccess\conf.ocd" - that path will be different on your PC, depending on where you unpacked the zip.
Well!!!! If you add a call to FlashErrorText() after each failed function call, you will find out that after FlashSetupAndConnect() it first reports:
C:\Program Files (x86)\National Instruments\LabVIEW 2013\cpu.ini does not exist.
then after the FlashErase():
Requires Prior Call to FlashSetup()
which is logical since the SetupAndConnect call had failed.
So what does this tell us?
The flashaccess.dll attempts to find the file cpu.ini in the directory of the current executable.
Unless there is a way to tell the DLL in the ocd file to look for this elsewhere, you may be required to put this file in the directory where your LabVIEW.exe resides (and if you build an executable, also in the directory where your executable will be). Basically it is a bit silly of the DLL to look for this only in the executable directory and not at least also in the DLL directory itself, but alas, such is life.
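(For what it's worth, looking next to the DLL itself would only take a handful of lines with standard Win32 calls - something the vendor could have done, not something we can change from LabVIEW:)

#include <windows.h>
#include <string.h>

/* builds "<directory of this DLL>\cpu.ini" into out */
static void ini_path_next_to_dll(char *out, DWORD out_len)
{
    HMODULE self = NULL;
    /* handle of the module that contains this very function, i.e. the DLL */
    GetModuleHandleExA(GET_MODULE_HANDLE_EX_FLAG_FROM_ADDRESS |
                       GET_MODULE_HANDLE_EX_FLAG_UNCHANGED_REFCOUNT,
                       (LPCSTR)&ini_path_next_to_dll, &self);
    GetModuleFileNameA(self, out, out_len);   /* full path of the DLL file */
    char *slash = strrchr(out, '\\');
    if (slash != NULL)
        strcpy(slash + 1, "cpu.ini");         /* replace the DLL name (no bounds check, sketch only) */
}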
-
I removed the dynamic path, but I don't get any error or broken arrow. The VI behaves the same as with the dynamic path.
See above in my edited post. And from the example project you included it seems that cdecl is indeed the right calling convention.
-
Thank you for the very quick answer!
I'm not as experienced a programmer as you, and I would be thankful if you could say a bit more about what you mean by:
"The DLL doesn't load on my system...."
Well, I got rid of the dynamic path in the diagram and simply pointed the Call Library Node to the DLL on my system, and LabVIEW ended up with a broken arrow for the VI, claiming it couldn't load the DLL.
Well, disregard that remark. Typical PEBCAK problem. I should have noticed that the VI got opened in LabVIEW 64-bit rather than 32-bit.
I have edited the VI in a way that I think should work. It seems that LabVIEW feels the functions need to be called as cdecl. Not sure why, since the assembly code seems to hint otherwise, but whatever.
I now get a return value of 1 for the Disconnect call, which sounds not too bad.
Obviously, contrary to what you believe, the FlashSetupAndConnect() call has to fail on a system with no hardware to connect to!
Just adapt the path generation for your conf file in a way that works for your installation.
-
I've never had FPGA code compile in 5 minutes... Even simple benchmark tests take upwards of 30 minutes for me... I'm jelly.
How much memory do you have in your machine? For the FPGA compiler it REALLY makes a difference if you can throw more memory at it.
-
Hello,
I'm posting this topic because I'm looking for help, since I haven't been able to solve an issue for a long time. I'm trying to write a LabVIEW program that will program a uC flash using the JTAG usbWiggler manufactured by Macraigor Systems. The manufacturer provides customers with a program that does all the stuff connected with programming, but it's a standalone app and I would like to build the programming process into a test sequence performed by LabVIEW (program + test). Additionally, the manufacturer provides a library to call various functions related to programming, erasing, etc. I tried to write a program using the DLL but had no success. There is a small cmd exe program which uses the DLL (also created by the supplier), and it works, but there are some bugs in it and I cannot change it or add error handling, so that's why I decided to use the DLL directly. I attached the DLL, the DLL readme file, the ocd file (configuration file for the uC), and the VIs.
I checked the first command, called "FlashSetupAndConnect"; it works without hardware, so you are able to test something. (Checked with the cmd exe program: without the device it always passes if the input data is OK, and fails when something is missing, like a bad file path, etc.) The status from each function should be "1", but I get "0" all the time.
The function should look like this (copied from the readme file):
int FlashProgrammer_SetupAndConnect(char *ocd_filename, char *device_name, char *device_address, unsigned long baud_rate,
unsigned long jtag_speed);
I will be very grateful for any answers and advice.
Dawid K.
There is absolutely no need for all the pointer stuff you are doing. The Call Library Node is perfectly capable of translating LabVIEW strings into C string pointers for the duration of the call. Your own managed pointers would only be necessary if the lifetime of the pointer needed to last beyond the function call itself. So get rid of all the pointer acrobatics and just use the code in the true case.
The DLL doesn't load on my system since it is compiled using Borland C and probably requires the Borland C Runtime library installed on the computer, which I have no plans to install on my system.
However, a quick look at the assembly would indicate that it might be compiled to use the stdcall convention for all its functions. The header files or the MS Visual C example mentioned in the documentation would certainly help to verify which calling convention is supposed to be the right one.
Also, the return value of those functions is defined as int, which under all modern Windows versions is a 32-bit integer. Your function thinking it's an int16 certainly might miss some interesting bits that way.
Look at this declaration in the documentation:
int FlashErase(int start_sector, int end_sector);
An int is still a 32-bit integer, not the 64-bit integer you have decided to make the parameters here (while still using an int16 for the return value)!!!!
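To spell out what the documented declarations translate to (assuming the readme is accurate), every plain C int and unsigned long here is 32 bits wide on Windows, and each char* is a C string pointer, so the Call Library Node parameters should be configured as int32/uint32 numerics and "C String Pointer" respectively:

#include <stdint.h>

/* on Windows (LLP64) both int and unsigned long are 32 bits wide */
int32_t FlashProgrammer_SetupAndConnect(char *ocd_filename, char *device_name,
                                        char *device_address, uint32_t baud_rate,
                                        uint32_t jtag_speed);   /* CLN: 3x C String Pointer, 2x uint32, int32 return */

int32_t FlashErase(int32_t start_sector, int32_t end_sector);   /* CLN: 2x int32, int32 return */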
Last but not least:
If you have several files to attach, and some of them cannot be posted because their file extension is rejected, it is a much better idea to pack everything into a ZIP file and post that, rather than renaming files so they appear to be something they are not and then having to explain how to rename everything back in order to get the right files.
-
Hi,
I have several USB instruments (Agilent/Keysight optical power meters) which I can talk to via USB.
To minimise the time "wasted" by transferring data between the instruments and the PC, I would like to query them in parallel. Unfortunately, LabVIEW doesn't agree with that strategy and reliably crashes when I do. It doesn't matter which command I send, so here's a sample snippet where I just query the instrument ID repeatedly. I don't even have to read the answer back (doing so doesn't make a difference):
This will kill LabVIEW 2012 and 2014, both 64-bit, without even popping up the "We apologize for the inconvenience" crash reporter dialog.
Has anyone had similar experiences?
I've seen LabVIEW crash while communicating over RS232 (VISA) but it's much harder to reproduce.
Is it outrageous to assume that communication with separate instruments via different VISA references should work in parallel?
All my instrument drivers are separate objects. I can ensure that communication with a single type of instrument is done serially by making the VI that does the communication non-reentrant. But I have to communicate with multiple instruments of different types, most of which use some flavour of VISA (RS232, USB, GPIB).
Am I just lucky that I haven't had more crashes when I'm talking to a lot of instruments?
Could it be a bug specific to the USB part of VISA? I've only recently changed from GPIB to USB on those power meters to get faster data transfer rates. In the past everything went via GPIB, which isn't a parallel communication protocol anyway afaik.
Tom
It depends on what instruments those are. The key here is that they are USB and, lacking any specific USB Raw setup in your diagram, must be virtual COM devices, which means VISA actually does very little itself other than talking to the Windows COMM API, which then calls into either the standard Windows virtual COM USB driver or a specific Agilent/Keysight virtual device driver. Which one it is I have no idea.
While VISA may be part of the problem, I have seen all kinds of weird and unpleasant things happen with virtual COM USB drivers from various manufacturers. I have seen very few problems with parallel (or any other kind of) VISA communication with devices other than USB COM devices, and since VISA really just treats them as any other serial port, the problem very likely has to be sought in the USB COM device driver, either the standard Windows driver or, most likely, a vendor-specific device driver for the instrument you are using.
Basically your instrument is pretty much the same as any of those RS-232-to-USB converter dongles, and there it makes a big difference whether you use a noname product with an unknown internal controller or one based on, for instance, the FTDI solution. None of the standard drivers that come with the SDKs for such chips is really meant to be distributed by OEMs to their clients, but most (especially no-name manufacturers) do so anyhow, since you really can't hire a programmer to improve a driver when you earn basically nothing on the sale of the product; and of the ones I've seen, only the FTDI driver is solid enough not to crash as soon as conditions are less than ideal.
Another indication of this is the fact that LabVIEW simply disappears. No crash produced purely in user space can terminate a process in such a way under modern Windows systems. This can only happen if a kernel driver violates some critical system integrity while being called by the process, directly or indirectly. And the only kernel component in this setup, aside from the normal Windows kernel handling, would be the USB virtual COM port driver or some other part of the USB driver stack.
This really leaves only two options for the cause of this crash: a buggy chipset driver for your system itself, or a buggy USB virtual COM driver for your instruments. Both of them are completely outside the control of VISA, and even more so of LabVIEW.
And while USB can potentially allow faster communication speeds than GPIB, it is even less parallel than GPIB: in USB every bit has to go through the same line, while GPIB has 8 parallel data lines. Both USB and GPIB do allow communicating with several devices quasi-parallel, though. And since the USB port is really just used as a virtual COM port in these cases, the bit speed (baud rate) is typically limited to values well below what you could reach with GPIB.
-
Thanks for the information. :)
Now I'm using LV2011.
This situation sometimes doesn't just output wrong data; worse, it crashes LabVIEW.
The error remained and crashed LV again and again... even after I rebooted the computer...
I googled the error code (Access Violation, 0xC0000005);
it seems it's not just me, it happens everywhere. (Maybe not the same, but very similar.)
I'll read more articles to see more discussions about it.
Access Violation is a generic error generated by the CPU itself when executing code causes it to access a memory address that the virtual memory manager does not recognize as assigned to the current process. Often it is a NULL pointer that is referenced, but any badly initialized pointer can be the culprit. It simply means that something got corrupted in the application's memory, but there is no way to determine how that happened from the access violation exception information alone.
-
Anybody out there know the status of LuaVIEW?
It's not yet released. Initially we had planned to add runtime support for NI realtime targets to LuaVIEW. This seemed a fairly straightforward exercise but turned out to be a major effort, especially with the recent addition of two different NI Linux realtime targets. It's in the works, and we hope to get it ready for a release at the end of this year.