Everything posted by JKSH

  1. The dialog tells you what's going on and what you can do to fix it: "The function is not declared in the header file.... Check the header file to make sure it contains declarations of the function.... Undefined symbols can prevent the wizard from recognizing functions." Which header file did you provide? Have a look inside. Does it look right? What should the correct names be?
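     For comparison, here is a sketch of a header that the wizard can parse (the function and its name are hypothetical, not from your DLL):

        /* mydevice.h -- hypothetical example. Every function you want the
           wizard to import must be declared like this, with all of its types
           defined or #included so that no symbols are left undefined. */
        #ifdef __cplusplus
        extern "C" {
        #endif

        int ReadSensorValue(int channel, double *value);

        #ifdef __cplusplus
        }
        #endif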
  2. Help us help you. Please provide details on what kind of help you want. Do you want us to:
      • give you some ideas on how to start?
      • review code that you've already written?
      • write the code for you?
     Also, please tell us how much LabVIEW experience and how much general programming experience you have.
  3. I didn't know that. Do you mean 64-bit Windows or 64-bit LabVIEW? How does 32-bit LabVIEW behave on 64-bit Windows?
  4. No. LabVIEW assumes packed data types, so a U8 followed by a U32 will take 5 bytes of space, not 8 bytes. If you control the .h file, you can use #pragma pack to pack your struct. If you don't control the .h file, you can insert dummy BYTEs into your cluster. (See the sketch below.)
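     Here is a minimal C sketch of the difference (assuming a typical compiler that aligns a 32-bit integer to 4 bytes):

        #include <stdint.h>
        #include <stdio.h>

        /* Default layout: the compiler inserts 3 bytes of padding after 'flag'
           so that 'value' starts on a 4-byte boundary. sizeof == 8. */
        struct Padded {
            uint8_t  flag;    /* 1 byte          */
                              /* 3 bytes padding */
            uint32_t value;   /* 4 bytes         */
        };

        /* Packed layout: matches a LabVIEW cluster of U8 + U32. sizeof == 5. */
        #pragma pack(push, 1)
        struct Packed {
            uint8_t  flag;    /* 1 byte  */
            uint32_t value;   /* 4 bytes */
        };
        #pragma pack(pop)

        int main(void)
        {
            printf("Padded: %zu, Packed: %zu\n",
                   sizeof(struct Padded), sizeof(struct Packed));
            return 0;
        }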
  5. From a quick poke through the property nodes, I could only find a way to script label fonts (of FP controls, BD nodes, etc.). I couldn't find a way to set fonts at the VI level. This post suggests that lots of font settings aren't available through scripting: https://forums.ni.com/t5/LabVIEW/VI-Scripting-in-LabVIEW-2012-change-font-size-of-tab-control/td-p/3011091 Expansion behaviour that depends on screen settings is something that I'd consider a bug. It will only become more prevalent as screens become more varied in DPI. Would you be willing to report this to NI? I doubt we'll see a change in current-gen LabVIEW, but hopefully NXG will be protected from this.
  6. You need to install the Moxa drivers on your host PC (Windows only). Then, use the Moxa NPort Administrator application to configure your serial ports. After that, your PC will treat the Moxa ports just like an on-board serial port or a USB-to-serial converter. This means your VI should read/write the Moxa serial ports using VISA, not TCP/IP.
  7. Hi @jmor, thanks for sharing your work. Are you planning to share the source code for the client and the server? If not, then a different license would be more suitable than the Non-Profit Open Software License 3.0. (Note: "Freeware" and "Open Software" are different things.)
  8. Cross-post: https://forums.ni.com/t5/LabVIEW/URGENT-Database-in-NXG-2-0-and-2-1/td-p/3783489 To answer your question, see the "Software Compatibility" tab at http://www.ni.com/en-au/shop/labview/compare-labview-nxg-and-labview.html
  9. My apologies, I remembered wrongly and gave you wrong code. I've fixed my previous post now. The syntax is funny because:
      • DoubleArrayBase is the struct itself.
      • DoubleArray is a pointer to a pointer to a DoubleArrayBase struct (yes, you read that right).
     Everything in the first half of my previous post (up to and including the first block of code) still applies to C++ code that reads array data from the DLL. C++ code that writes array data into LabVIEW is a bit more complex. Look in your LabVIEW-generated header file again: do you see functions called AllocateDoubleArray() and DeAllocateDoubleArray()?

        // In the LabVIEW-generated header, mydll.h
        typedef struct {
            int32_t dimSizes[2];
            double element[1];
        } DoubleArrayBase;
        typedef DoubleArrayBase **DoubleArray;

        DoubleArray __cdecl AllocateDoubleArray(int32_t *dimSizeArr);
        MgErr __cdecl DeAllocateDoubleArray(DoubleArray *hdlPtr);

        // In your code
        #include "mydll.h"

        int main()
        {
            // Allocate and write the input array
            int32_t datasz[2] = {2, 3};
            DoubleArray arrayIn = AllocateDoubleArray(datasz);
            (*arrayIn)->element[0] = 1;
            (*arrayIn)->element[1] = 2;
            (*arrayIn)->element[2] = 3;
            (*arrayIn)->element[3] = 11;
            (*arrayIn)->element[4] = 12;
            (*arrayIn)->element[5] = 13;

            // Call your function
            DoubleArray arrayOut;
            Linear_discrim_4dll(&arrayIn, &arrayOut, 2, 3);

            // Extract data from the output array, ASSUMING the output is 2x2
            double cArray[2][2];
            cArray[0][0] = (*arrayOut)->element[0];
            cArray[0][1] = (*arrayOut)->element[1];
            cArray[1][0] = (*arrayOut)->element[2];
            cArray[1][1] = (*arrayOut)->element[3];

            // Free the input array's memory
            DeAllocateDoubleArray(&arrayIn);

            // ...
        }
  10. See the memory layout of Arrays at http://zone.ni.com/reference/en-XX/help/371361P-01/lvconcepts/how_labview_stores_data_in_memory/
      • dimSizes contains the sizes of the 2 dimensions.
      • element is the interleaved array. Even though the header suggests its size is 1, its real size is dimSizes[0] * dimSizes[1]. (This technique is called the "C Struct Hack": see https://tonywearme.wordpress.com/2011/07/26/c-struct-hack/ or https://aticleworld.com/struct-hack-in-c/)
     If you create the 2-by-3 array in LabVIEW and pass it to your DLL, you can read it in C/C++ like this (note the double dereference: the parameter is a pointer to a handle):

        void vi1(DoubleArray *array_fromLv)
        {
            double cArray[2][3];
            cArray[0][0] = (**array_fromLv)->element[0];
            cArray[0][1] = (**array_fromLv)->element[1];
            cArray[0][2] = (**array_fromLv)->element[2];
            cArray[1][0] = (**array_fromLv)->element[3];
            cArray[1][1] = (**array_fromLv)->element[4];
            cArray[1][2] = (**array_fromLv)->element[5];
            // Do stuff...
        }

     To pass array data from the DLL to LabVIEW, the idea is to do the opposite:

        void vi2(DoubleArray *array_toLv)
        {
            double cArray[2][3];
            // Do stuff...
            (**array_toLv)->element[0] = cArray[0][0];
            (**array_toLv)->element[1] = cArray[0][1];
            (**array_toLv)->element[2] = cArray[0][2];
            (**array_toLv)->element[3] = cArray[1][0];
            (**array_toLv)->element[4] = cArray[1][1];
            (**array_toLv)->element[5] = cArray[1][2];
        }

     VERY IMPORTANT: Before your DLL writes any data, you must properly allocate the memory. There are 2 ways to do this:
      • Pre-allocate the array in LabVIEW, pass this array into the DLL, and let the DLL overwrite the array contents, OR
      • Call LabVIEW Manager functions (http://zone.ni.com/reference/en-XX/help/371361P-01/lvexcodeconcepts/labview_manager_routines/) to allocate or resize the array before writing the data. These functions are poorly documented, however.
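     As a rough illustration of the second option, here is a sketch using NumericArrayResize() from the LabVIEW Manager (declared in extcode.h, which ships in LabVIEW's cintools folder; double-check the signature against your LabVIEW version):

        #include "extcode.h"   // LabVIEW Manager declarations (cintools)
        // DoubleArray is the handle typedef from your LabVIEW-generated header

        // Sketch: resize a 2D double-array handle to 2x3, then fill it.
        // NumericArrayResize() allocates or grows the handle as needed;
        // 'fD' is the Manager's type code for float64.
        MgErr Fill2x3(DoubleArray *array_toLv)
        {
            MgErr err = NumericArrayResize(fD, 2, (UHandle *)array_toLv, 2 * 3);
            if (err != noErr)
                return err;

            (**array_toLv)->dimSizes[0] = 2;   // rows
            (**array_toLv)->dimSizes[1] = 3;   // columns
            for (int i = 0; i < 6; i++)
                (**array_toLv)->element[i] = 1.0 * i;  // row-major, interleaved
            return noErr;
        }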
  11. That is the nature of memory corruption: often, it doesn't cause a crash immediately. The crash happens later, when something else tries to use the corrupted memory. This is monitoring memory allocation. It helps you detect memory leaks, but doesn't detect memory corruption. They are different issues: memory leaks cause crashes by using up all of your application's memory; memory corruption causes crashes by scrambling your application's data.
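     A contrived C sketch of why corruption crashes late (hypothetical code, purely for illustration): the out-of-bounds write itself succeeds silently, and the crash only comes later, when the scrambled pointer is used.

        #include <stdlib.h>
        #include <string.h>

        typedef struct {
            char name[8];
            char *data;   /* sits right after 'name' in memory */
        } Record;

        int main(void)
        {
            Record r;
            r.data = malloc(16);

            /* BUG: copies 12 bytes into an 8-byte buffer. This overruns
               'name' and scrambles 'r.data' -- but nothing crashes yet. */
            strcpy(r.name, "Hello world");

            /* ...much later, something uses the corrupted pointer: crash. */
            memset(r.data, 0, 16);

            free(r.data);
            return 0;
        }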
  12. To be fair, I think DCAF is quite well-described at http://www.ni.com/white-paper/54370/en/ and http://sine.ni.com/nips/cds/view/p/lang/en/nid/213988 (Image from http://sine.ni.com/nips/cds/view/p/lang/en/nid/213988) The configuration aspect is just one part of DCAF, and CVT fits inside another part: the "Data Exchange". So, I'd say that DCAF can contain CVT functionality, but CVT doesn't do most of what DCAF can do.
  13. That's correct. That also means it is an issue for me, as I have 2 screens with different scalings.
  14. To solve both issues, insert "Adapt to Type.vi" before "Variant to Data". However, your cluster element names are now case-sensitive. "TimeStamp" will not match "timeStamp" in the JSON string.
  15. If you're considering this, then the next question is: What style of comments? @silmaril suggested YAML, which uses "# ...". JavaScript (which is where JSON came from) uses "/* ... */" and "// ...". Will you choose one to support? Will you support both (and any other style that might appear in the future)? See the snippet below for what each looks like.
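     The same hypothetical config fragment with each comment style (plain JSON allows neither; this is purely illustrative):

        # YAML-style comment
        { "timeout_ms": 500 }

        // JavaScript-style line comment
        /* JavaScript-style block comment */
        { "timeout_ms": 500 }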
  16. Please understand: All of this is a lot of work! It can take several days of full-time work to finish. It is not reasonable to expect people on this forum to do your work for free. Someone needs to read through the data sheets. They might need to reverse-engineer the Cyton G15 shield hardware and/or code. Then, they need to wire the myRIO to the GD02, and the GD02 to the servo. They also need to write software for the myRIO to communicate with the GD02. Finally, they need to test everything. You should either hire someone to teach you how to do all of this, or hire someone to do it for you.
  17. I agree completely: Keep the Git-tracked config files separate from the deployment config files. Trying to make Git track the files and ignore the files at the same time is messy and unintuitive; if any errors occur in the process, they might be hard to detect and to fix. Some other possibilities to consider (these ideas aren't mutually exclusive; you can implement more than 1):
      • Have your application search for config files in a "deployment" folder first. If those aren't found, then fall back to the simulation config files. This way, both deployment and development machines can run the same code yet read from different folders, the "deployment" folders are untracked by Git, and there's no risk of overwriting their contents. (A sketch of this lookup follows below.)
      • Make it visually obvious when your application is running in simulation mode (e.g. change the background colour and show a label).
      • Deploy by building and distributing executables instead of pulling source code. It sounds like your deployment machines run the LabVIEW source code directly. How do you manage the risk of the code getting accidentally (or maliciously) modified by an operator?
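     A minimal C sketch of that fallback lookup (the folder and file names are hypothetical; a LabVIEW implementation would do the same thing with Check if File or Folder Exists):

        #include <stdio.h>

        /* Return 1 and fill 'outPath' with the first existing config file.
           "deployment" is untracked by Git; "config_sim" holds the
           Git-tracked simulation configs. */
        static int findConfig(const char *fileName, char *outPath, size_t outSize)
        {
            const char *searchOrder[] = { "deployment", "config_sim" };
            for (size_t i = 0; i < 2; i++) {
                snprintf(outPath, outSize, "%s/%s", searchOrder[i], fileName);
                FILE *f = fopen(outPath, "r");
                if (f) {              /* deployment config wins if present */
                    fclose(f);
                    return 1;
                }
            }
            return 0;
        }

        int main(void)
        {
            char path[260];
            if (findConfig("daq_settings.ini", path, sizeof path))
                printf("Using config: %s\n", path);
            else
                printf("No config found; aborting.\n");
            return 0;
        }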
  18. Not directly. One possible workaround: You could keep the .ctl as a Type Def most of the time. When you want to propagate cosmetic changes during development, temporarily Save + Apply it as a Strict Type Def. After that, change it back to a Type Def.
  19. You can use a basic state machine instead of a JKI State Machine: http://www.ni.com/tutorial/7595/en/
  20. I got an email saying that the next version of NXG is out... but nothing about 2017 SP1!
  21. (1) sounds fine. (2) could work, with caveats. You need to make sure that the data passed between LabVIEW and the 2 DLLs doesn't get destroyed too early. You also need to ensure that the 3rd-party DLL is happy to be called from different threads, OR make sure you only call it from the UI thread. No, I don't think your callback function can be "deallocated", unless you tell LabVIEW to unload your DLL. It has a permanent address in your DLL, after all. What kind(s) of data are transferred between the DLLs and LabVIEW? How are you ensuring that things are thread-safe?
  22. You're welcome, Fred. I see on forums.ni.com that your code is a bit different. In particular, your Signal array has 512 elements instead of 513. Which is it? You need to count accurately, or else your program might crash. Also, nathand posted more important points at forums.ni.com:
      • You must configure the Array to Cluster node correctly.
      • Each Array to Cluster node can only handle up to 256 elements, so you need to duplicate its output to reach 512/513 elements.
  23. Hi, your issue is related to data structure alignment and padding. See https://stackoverflow.com/questions/119123/why-isnt-sizeof-for-a-struct-equal-to-the-sum-of-sizeof-of-each-member By default, C/C++ compilers add padding to structs to improve memory alignment. However, LabVIEW does not add padding to clusters. So, in your DLL, the structs' memory layout is probably like this:

        struct Signal {
            uint32 nStartBit;        // 4 bytes
            uint32 nLen;             // 4 bytes
            double nFactor;          // 8 bytes
            double nOffset;          // 8 bytes
            double nMin;             // 8 bytes
            double nMax;             // 8 bytes
            double nValue;           // 8 bytes
            uint64 nRawValue;        // 8 bytes
            bool   is_signed;        // 1 byte
            char   unit[11];         // 11 bytes
            char   strName[66];      // 66 bytes
            char   strComment[201];  // 201 bytes
                                     // 1 byte (PADDING)
        };                           // TOTAL: 336 bytes

        struct Message {
            uint32 nSignalCount;     // 4 bytes
            uint32 nID;              // 4 bytes
            uint8  nExtend;          // 1 byte
                                     // 3 bytes (PADDING)
            uint32 nSize;            // 4 bytes
            Signal vSignals[513];    // 172368 bytes (= 513 * 336 bytes)
            char   strName[66];      // 66 bytes
            char   strComment[201];  // 201 bytes
                                     // 5 bytes (PADDING)
        };                           // TOTAL: 172656 bytes

     There are two ways you can make your structs and clusters compatible:
      • If you control the DLL source code and you can compile the DLL yourself, then you can update your code to pack the structs (see the sketch below). If your compiler is Visual Studio, add #pragma pack(): https://msdn.microsoft.com/en-us/library/2e70t5y1.aspx If your compiler is MinGW, add __attribute__((__packed__)): https://stackoverflow.com/a/4306269/1144539
      • If you cannot compile the DLL yourself or if you don't want to change the DLL, then you can add padding to your LabVIEW clusters. Signal: add 1 byte (U8) to the end of the cluster. Message: add 3 bytes between nExtend and nSize, and 5 bytes to the end of the cluster.
     I must say, the Message struct is huge! (>170 KiB)
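     For reference, a sketch of the packed version of Signal (Visual Studio's #pragma pack syntax, using the same type names as above; MinGW's __attribute__((__packed__)) achieves the same layout):

        #pragma pack(push, 1)        // no padding anywhere inside this region
        struct Signal {
            uint32 nStartBit;
            uint32 nLen;
            double nFactor;
            double nOffset;
            double nMin;
            double nMax;
            double nValue;
            uint64 nRawValue;
            bool   is_signed;
            char   unit[11];
            char   strName[66];
            char   strComment[201];
        };                           // TOTAL: 335 bytes -- matches the LabVIEW cluster
        #pragma pack(pop)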