Everything posted by bublina

  1. It has nothing to do with range. It is about the bit depth, like Infinitenothing posted. Which is sort of like range, but you can only set it to discrete values (just like setting the image type), hence the confusion. It also results in odd behavior, since the display scaling is somehow calculated from the pixel values, so one time 2^14 is gray and another time it is black. Not something that can happen with real images, but during development, some probe images can make your head spin. Hopefully the attached VI explains what I mean. ImageBitDepth.vi
  2. I am not saying you should replace it. It pretty much depends on the work you do. I actually think that if std::string starts to fail (I think the only exception it throws is connected with accessing elements outside the buffer), you have bigger fish to fry, and as long as the code runs at application level, the standard containers are pretty safe. Just beware. Plus, if you do all the string formatting in what(), called in the catch (when the stack is gone), I think it's pretty safe. I do it like that myself. Nice to know. Most of the objects I use are new()ed and stay out of the stack unwinding, and the rest of the objects are usually utility objects that will "only" leak if not properly destroyed.

Microsoft first implemented nothrow in VS2015; until then it is _NOTHROW. It does indeed matter to the compiler, as a void func() throw() {} will only result in the compiler not calling the unexpected-exception handler, which I think is not a good idea. Better to exit with some unhandled-exception handler than with some fireball. This is a copy from the MS page https://msdn.microsoft.com/en-us/library/wfa0edys(v=vs.120).aspx:

"However, if an exception is thrown out of a function marked throw(), the Visual C++ compiler will not call unexpected (see unexpected (CRT) and unexpected (<exception>) for more information). If a function is marked with throw(), the Visual C++ compiler will assume that the function does not throw C++ exceptions and generate code accordingly. Due to code optimizations that might be performed by the C++ compiler (based on the assumption that the function does not throw any C++ exceptions) if a function does throw an exception, the program may not execute correctly."

Thanks for the answer.
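To see the quoted behavior in compilable form, here is a minimal sketch (my own, not from the MS page); MSVC typically warns about it, but the point is what the optimizer assumes:

```cpp
// Minimal sketch of the danger quoted above: MSVC assumes f() cannot throw,
// so when it does, the handler below may be optimized away and the program
// can misbehave instead of calling unexpected().
#include <iostream>
#include <stdexcept>

void f() throw()                        // promises not to throw
{
    throw std::runtime_error("broken promise");
}

int main()
{
    try
    {
        f();
    }
    catch (const std::exception &e)     // under /EHsc + throw() this may never run
    {
        std::cout << e.what() << '\n';
    }
    return 0;
}
```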
  3. None of my exceptions is intended to leave the boundary of the DLL. Pretty much applies to malloc and free too. What exactly do you mean by "What LabVIEW can do however is hooking into the Windows exception mechanism"? I never heard of such a thing. I only know that you can reimplement standard C++ exceptions with custom code based on SEH, plus replace the signal handlers for machine-level exceptions. So what is the best option to take in CLFNs? Don't catch anything you don't generate, and keep error checking on "default"? What compiler flag should I set? (I don't think it matters.) I use the default /EHsc. I am not willing to rewrite my code to use SEH instead of standard C++ in order to be able to do any stack walks or stuff like that, just in case I ever decide to compile the code for anything but Windows.
  4. This applies to Windows only: I read you well. BUT!! My code can throw exceptions that are generated by NI code. Once again, I wrote the "expected answer" incorrectly, because I wrote "before your code is executed", which should be "any time your code is executed". The thing with CLFNs is that if you, for example, attempt to execute stuff like this:

```cpp
int *pointer_of_doom = nullptr;
*pointer_of_doom = 666;   // null dereference: a machine-level exception
```

the CLFN will still report it as an exception. Which is nice, since it doesn't just outright crash LabVIEW when you make a pointer arithmetic error. So LabVIEW does override the signal handlers for some(?!) signals, and also who knows what the well-documented memory manager functions may throw.

You write try-multicatch for all the functions you export. I just invoke them in one or more functions if needed, via lambda. That's the only benefit: you write all that try-catch stuff in just one place.

Also, I see you like the STD library, so beware of using std::string or other exception-capable containers in your custom exceptions, since LabVIEW overrides the terminate() handler too. You throw some exception that is legit and has info, like if you're trying to allocate too much memory because of wrong parameters. You catch the exception, the string can't allocate and throws another exception in its constructor, terminate() is called, and you get totally non-relevant output from LabVIEW. I had to figure this out the hard way.
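One way around that pitfall is to keep the exception itself allocation-free. A minimal sketch (my construction, not code from this thread):

```cpp
// Sketch of an exception that cannot throw while being constructed or copied:
// a fixed char buffer replaces std::string, so no allocation can fail mid-throw.
#include <exception>
#include <cstring>

class FixedMsgError : public std::exception
{
    char m_msg[256];
public:
    explicit FixedMsgError(const char *msg) noexcept
    {
        std::strncpy(m_msg, msg, sizeof(m_msg) - 1);
        m_msg[sizeof(m_msg) - 1] = '\0';   // strncpy does not always terminate
    }
    const char *what() const noexcept override { return m_msg; }
};
```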
  5. 1) I know, I just didn't include it there. Thanks for mentioning it for people who wander in here. 2) It derives from std::exception, but it can never leave the guard. 3) The code might be confusing; it does this: ExceptionGuard_LVErr is a utility function that accepts a function object, which gets invoked in the try-catch block. I used some placeholder code (a throw and an unreachable return), but you can invoke any code inside. The "&&" tells the compiler how to pass parameters. So I can call it like this:

```cpp
int64_t LVflushencoder(venc *vencoder, uintptr_t *LVPkt, int32_t *GotPacket, LVErrorCluster *LVError)
{
    return ExceptionGuard_LVErr([&] {
        return vencoder->FlushEncoder(LVPkt, GotPacket);
    }, LVError);
}
```

It indeed serves as a generic layer to translate exceptions that I fired up somewhere in the code and that are supposed to be passed into this layer. So the advantage is that you do not need to pass the error cluster as an argument through all the nested functions. The code executed in the guard might have its own try-catch or ifs that deal with exceptions and errors that should never traverse to the top layer, but for things like:

```cpp
auto ptr = allocatestuff();
if (!ptr)
    throw WrpExcUser("could not even allocate stuff, so don't bother", CUSTOMERROR);
```

there is this layer. Rereading my original question, it is confusing. I was expecting an answer like: "Do not catch exceptions X and Y, because they might be thrown even before your code is executed and you would catch them and translate them into something even more confusing."
  6. I am refactoring code that works as an interface between C++ exceptions and the LabVIEW error cluster. So far I have something like this (compressed to an example):

```cpp
typedef struct
{
    LVBoolean status;
    int32_t code;
    LStrHandle message;
} LVErrorCluster;

template<typename Func>
int64_t ExceptionGuard_LVErr(Func&& guardfunc, LVErrorCluster *LVError)
{
    try
    {
        return guardfunc();
    }
    catch (WrpExcUser &e)
    {
        return CopyErrorUS(e.whaterr(), e.what(), LVError);
    }
    return NOERROR;
}

// then use the guard like this in a LabVIEW CLFN call
int64_t ErrorTest(LVErrorCluster *lverr)
{
    return ExceptionGuard_LVErr([&] {
        throw WrpExcUser("Some funky user error", -6426429678568);
        return NOERROR;
    }, lverr);
}
```

But my code doesn't catch just the WrpExcUser exception; it also catches generic stuff like std::bad_alloc and std::bad_function_call that can occur within the guarded section of code, and turns them into some user-friendly messages instead of the standard "exception occurred -1xxx" message. Is it legal to catch these exceptions? I wanted to keep my CLFNs' error checking at default.
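For the record, a version that also catches the generic stuff could look roughly like this (a sketch; OUTOFMEMORY is a made-up code standing in for whatever the wrapper defines, and WrpExcUser/CopyErrorUS are from the snippet above):

```cpp
#include <exception>
#include <new>      // std::bad_alloc

template<typename Func>
int64_t ExceptionGuard_LVErr(Func&& guardfunc, LVErrorCluster *LVError)
{
    try
    {
        return guardfunc();
    }
    catch (WrpExcUser &e)           // my own exceptions, with code + text
    {
        return CopyErrorUS(e.whaterr(), e.what(), LVError);
    }
    catch (std::bad_alloc &)        // allocation failed inside the guard
    {
        return CopyErrorUS(OUTOFMEMORY, "Allocation failed inside the DLL", LVError);
    }
    catch (std::exception &e)       // any other standard exception
    {
        return CopyErrorUS(CUSTOMERROR, e.what(), LVError);
    }
    // deliberately no catch (...): machine-level/SEH exceptions still reach
    // the CLFN's own error checking
    return NOERROR;
}
```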
  7. I am pretty happy myself that I decided to "help" in this topic, otherwise I would never have found out about PostLVUserEvent making a deep copy of the supplied data. What on earth are those orange and yellow nodes? Are you referring to "run in any thread" and "run in UI thread" for the call library nodes? If so, how does it relate to the "NI engineer" example? The only issue I see is that you cannot use the global DLL variable for some session-based app design, but it still doesn't explain the orange and yellow nodes.
  8. Jack, can you please reply to post #23? After rereading your suggestion, I see it is the same as mine and I just didn't understand the text at first glance. The main problem with the C<->LabVIEW boundary is that it is completely undocumented. Also, is my presumption correct: that if I allocate a handle and wire it out of the DLL node into an indicator terminal, it will get garbage collected?
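In code, the presumption would look like this (a sketch against extcode.h; MakeGreeting is a made-up export, and the claim that LabVIEW owns the handle afterwards is exactly the presumption being asked about):

```cpp
// Sketch: allocate a LabVIEW string handle in the DLL and hand it out through
// a CLFN parameter; once wired to an indicator, LabVIEW's memory manager is
// presumed to own (and eventually clean up) the handle.
#include <cstring>
#include "extcode.h"   // LabVIEW manager API: DSNewHandle, LStrHandle, MgErr

extern "C" MgErr MakeGreeting(LStrHandle *out)
{
    const char msg[] = "hello from the DLL";
    const int32 len = sizeof(msg) - 1;

    *out = (LStrHandle)DSNewHandle(Offset(LStr, str) + len); // allocate in LabVIEW's memory
    if (*out == nullptr)
        return mFullErr;

    (**out)->cnt = len;                   // set the string length
    std::memcpy((**out)->str, msg, len);  // copy the bytes in
    return mgNoErr;
}
```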
  9. You are not missing anything. I posted code that leaks memory; it is fixed now. To be perfectly honest, I thought that you really do not need to clean that up, since it will get a record in the global memory table and LabVIEW will clean it up. Somehow. Magically. I don't understand the space-time thingy. Does it still apply after I fixed the posted code?
  10. This is actually a simple task. All the proposed solutions may get you to the finish line, but it all seems quite complicated. You need to figure out for yourself whether the callback prototype has a user parameter (void *user_param) that allows you to pass the EventRefNum that PostLVUserEvent can use. If not, you need to declare it (the EventRefNum) as a variable in one of your *.c files, so it is a DLL global, and write a function that sets the DLL variable to the correct RefNum value of the user event you created in LabVIEW code.

I wouldn't dive into doing memory copies into memory preallocated by LabVIEW. Just build LabVIEW-"aware" data in your DLL code. It is the cleanest solution. You do not need to dispose of any handles on the LabVIEW side; just make a new one for the data, shove it into PostLVUserEvent and harvest it in an event structure with native LabVIEW code. LabVIEW takes care of cleaning up the copy it delivers to the event structure (note in the code below that the local handle is disposed after posting, since PostLVUserEvent makes a deep copy).

If you are not familiar with making new handles, here is code to help you out! Below is a callback function that passes logging data into LabVIEW (exactly like you need). The important variables are "s" (the data you want to pass to LV) and "NewInfo" (the LabVIEW-aware copy).

```cpp
static void avlog_cb(void *avcl, int level, const char *szFmt, va_list varg)
{
    static int print_prefix = 1;
    char s[LINE_SZ];

    if (level <= g_ErrInfo.GetErrorLevel()) // post or not to post the log?
    {
        MgErr Err = mgNoErr;
        // need to use this because vsprintf is not compatible with some ffmpeg log/time formats
        // length = vsprintf_s(s, n - 1, szFmt, varg);
        g_FcTb.FUNC_av_log_format_line.getPtr()(avcl, level, szFmt, varg, s, LINE_SZ, &print_prefix);

        if (g_ErrInfo.GetEventRef()) // if the UserEvent is selected for logging
        {
            LStrHandle NewInfo = nullptr;
            NewInfo = (LStrHandle)DSNewHandle(Offset(LStr, str) + sizeof(uChar)); // make a new handle
            assert(NewInfo != nullptr);
            (*NewInfo)->cnt = 0;
            Err = LStrPrintf(NewInfo, (CStr)"LOG_DLL;%d;%s;%s", level,
                             (avcl != nullptr ? (*(AVClass **)avcl)->class_name : ""), s); // print into the handle
            assert(Err == mgNoErr);
            Err = PostLVUserEvent(g_ErrInfo.GetEventRef(), &NewInfo); // post to the log
            assert(Err == mgNoErr);
            Err = DSDisposeHandle(NewInfo); // PostLVUserEvent made a deep copy, so dispose the local handle
            assert(Err == mgNoErr);
        }
        else // else output to the Dbg window
        {
            Err = DbgPrintf("LOG_DLL;%d;%s;%s", level,
                            (avcl != nullptr ? (*(AVClass **)avcl)->class_name : ""), s);
            assert(Err == mgNoErr);
        }
    }
}
```

Over here is a nice simple example from an NI engineer. It makes a thread that sends a simple structure to LabVIEW. Notice that he doesn't use malloc or new to make a data copy, since PostLVUserEvent makes a deep copy of the data structure.

http://forums.ni.com/t5/LabWindows-CVI/using-PostLVUserEvent-function-from-Labview-dll/td-p/2510908

If you need to allocate some crazy stuff for your data (the structure of the data is complex), here is some code to guide you. It deals with an array (handle) of clusters of two strings (handles).

```cpp
typedef struct
{
    LStrHandle key;
    LStrHandle value;
} DictElement, **DictElementHdl;

typedef struct DictionaryArr
{
    int32_t dimsize;
    DictElement Arr[1];
} **DictionaryArrHdl;

void PrintChar2LVStrHandle(const char *charstr, LStrHandle *StrHandleP, bool forcenew)
{
    MgErr error = mgNoErr;
    std::string temp = "";
    if (charstr)
    {
        temp.assign(charstr);
    }
    if ((!IsHandleEmpty(StrHandleP)) && !forcenew)
    {
        error = DSDisposeHandle(*StrHandleP);
        if (error != mgNoErr)
            throw(WrpExcUser("PrintChar2LVStrHandle()", error));
    }
    *StrHandleP = (LStrHandle)DSNewHandle(Offset(LStr, str) + sizeof(uChar));
    if (*StrHandleP == nullptr)
        throw(WrpExcUser("Could not allocate string handle", CUSTOMERROR));
    //(**StrHandleP)->cnt = 0;
    error = LStrPrintf(*StrHandleP, (CStr)"%s", temp.c_str());
    if (error != mgNoErr)
        throw(WrpExcUser("Could not print to string handle", error));
}

// If there are no entries to copy, this function just makes an empty LV array.
// Plan: first dispose the handles in the existing LV array, resize the LV array
// to the correct size, fill the array while av_dict_get iterates, set the array
// size, and free the FFmpeg dictionary via av_dict_free (so the caller must not).
void FFmpegDict2LVDict(DictionaryArrHdl *LVDictArr, AVDictionary *FFDictArr)
{
    MgErr error = mgNoErr;
    AVDictionaryEntry *DictE = nullptr;
    int count = g_FcTb.FUNC_av_dict_count.getPtr()(FFDictArr); // get the entry count
    int i = 0;

    if (LVDictArr)
    {
        if (*LVDictArr)
        {
            for (i = 0; i < (**LVDictArr)->dimsize; ++i)
            {
                error = DSDisposeHandle((**LVDictArr)->Arr[i].key); // dispose disposable handles here
                (**LVDictArr)->Arr[i].key = nullptr;                // and set them to nullptr
                if (error != mgNoErr)
                    throw(WrpExcUser("Error disposing LV dictionary entry handle", error));
                error = DSDisposeHandle((**LVDictArr)->Arr[i].value); // and here
                (**LVDictArr)->Arr[i].value = nullptr;
                if (error != mgNoErr)
                    throw(WrpExcUser("Error disposing LV dictionary entry handle", error));
            }
            error = DSSetHSzClr(*LVDictArr, Offset(DictionaryArr, Arr) + sizeof(DictElement) * count); // set the correct size
            if (error != mgNoErr)
                throw(WrpExcUser("Error scaling video decoder info handle", error));
        }
        else // the array is empty?? should not happen; means the function returned unused dictionary options, but none were supplied
        {
            *LVDictArr = (DictionaryArrHdl)DSNewHClr(Offset(DictionaryArr, Arr) + sizeof(DictElement) * count); // make a new handle
            if (*LVDictArr == nullptr)
                throw(WrpExcUser("Error creating dictionary LV array handle", CUSTOMERROR));
        }

        // at this point the array is ready for the dict entries to be copied
        i = 0;
        while ((DictE = g_FcTb.FUNC_av_dict_get.getPtr()(FFDictArr, "", DictE, AV_DICT_IGNORE_SUFFIX)) != nullptr)
        {
            // iterate over all entries in FFDictArr
            PrintChar2LVStrHandle(DictE->key, &(**LVDictArr)->Arr[i].key, true);     // copy the key string
            PrintChar2LVStrHandle(DictE->value, &(**LVDictArr)->Arr[i].value, true); // copy the value string
            ++i; // advance to the next element (this increment was missing in the originally posted code)
        }
        // at this point all the keys and values are in the LV array
        (**LVDictArr)->dimsize = count; // set the LV array size
        g_FcTb.FUNC_av_dict_free.getPtr()(&FFDictArr); // the FF dict entries and FF dict array are freed here
    }
    else
        throw(WrpExcUser("Empty array handle pointer", INVALIDPTR));
}
```
  11. I see. Thank you. Never used this function; I always thought the bit depth is defined by the pixel type. It is a little confusing and I honestly have no idea why an extra parameter like this would exist.
  12. After re-reading your problem, I guess this approach is better. Make an array that translates the control names and possible values into messages (or whatever data you use to control your program further). I use this approach as well. The cool thing is that all the translations are LabVIEW data, so you can make your own code to add new dictionary entries; a change to the FP means you just need to run some VI that will populate/change the array, and you can script your own code to do that. Event.vi Translator.vi
  13. Mark, does this mean you have 80 controls on your FP? When this is the case for me, the controls usually correspond to each other (or not), so you put them into clusters, then you just register the value change for each cluster. Once the event happens, the left event nodes will give you the Old + New values, so compare them; this gives you the index of the changed control. Get the control reference out of the cluster, read its name and wire it to the case structure. Now you have 8 event cases, where each has a case structure with 10 cases named by the controls. Works fine for me and makes the BD more readable. Event.vi
  14. FFT is the simplest method to get from the time domain into the frequency domain. If you are looking for better resolution, one that can better detect frequency bins, peaks, harmonics etc., look into the following (see the sketch after this list):

Non-parametric: the minimum variance method. It uses a bank of FIR filters whose gains sum to 1; you supply the frequencies you want to look at, the method calculates the FIR parameters and runs the signal through them. The product is much better than FFT.

Parametric: look for AR, MA, ARMA. These methods replace the original signal in time with a massive polynomial that has either pole roots or zero roots or both (ARMA), and once you have the polynomial, you can generate as many samples as you need and then run a standard FFT on it.

MUSIC: this method allows you to input the noise as a parameter. It uses the autocorrelation matrix, looks at the eigenvalues and removes the smallest ones as noise (kinda like the PCA method does), then takes the rest to reconstruct the clean signal and runs FFT on it.

If you have the "Advanced Signal Processing Toolkit", I think some of them are implemented there, so you should get nice frequency output with nice resolution in just one VI.
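The sketch mentioned above, illustrating the parametric (AR) idea (my own illustration; real implementations such as the toolkit VIs choose the model order and solve the equations far more robustly):

```cpp
// Fit an AR(2) model with the Yule-Walker equations and evaluate its power
// spectrum on a frequency grid; the spectrum peaks near the test tone f0.
#include <cmath>
#include <complex>
#include <cstdio>
#include <random>
#include <vector>

int main()
{
    const double PI = 3.14159265358979323846;
    const int N = 1024;
    const double fs = 1000.0, f0 = 123.0;   // sample rate and test tone, made up
    std::mt19937 rng(42);
    std::normal_distribution<double> noise(0.0, 0.1);

    std::vector<double> x(N);
    for (int n = 0; n < N; ++n)
        x[n] = std::sin(2.0 * PI * f0 * n / fs) + noise(rng);

    // biased autocorrelation estimates r[0..2]
    double r[3] = {0.0, 0.0, 0.0};
    for (int k = 0; k <= 2; ++k)
        for (int n = 0; n + k < N; ++n)
            r[k] += x[n] * x[n + k] / N;

    // Yule-Walker for AR(2): [r0 r1; r1 r0] * [a1 a2]' = [r1 r2]'
    const double det = r[0] * r[0] - r[1] * r[1];
    const double a1 = (r[1] * r[0] - r[1] * r[2]) / det;
    const double a2 = (r[0] * r[2] - r[1] * r[1]) / det;
    const double sigma2 = r[0] - a1 * r[1] - a2 * r[2]; // driving-noise variance

    // AR power spectrum: P(f) = sigma2 / |1 - a1*e^{-jw} - a2*e^{-j2w}|^2
    for (double f = 100.0; f <= 150.0; f += 5.0)
    {
        const double w = 2.0 * PI * f / fs;
        const std::complex<double> den =
            1.0 - a1 * std::polar(1.0, -w) - a2 * std::polar(1.0, -2.0 * w);
        std::printf("%6.1f Hz : %g\n", f, sigma2 / std::norm(den)); // peaks near f0
    }
    return 0;
}
```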
  15. If I display a U16, I16 or SGL image using a probe or display control, it shows pitch black for any value (they should all span 0 to U16 max for U16 and SGL, and 0 to I16 max for I16, I think) if all image pixels have the same value. Is this a bug or expected behavior? It doesn't happen with U8 images though...
  16. I meant CPU, though after some testing, it is clear it is optimized by the compiler. Are those buffer dots reworked now? In my 2012 they are showing oddly.
  17. I have noticed that some array operations come for free in some cases, i.e. do not cost any resources, such as: reverse, transpose, reshape. Since the array representation in memory is like: dimension1 dimension2 ... dimensionX data, I can see how things like reshape are free, but how does, for example, reverse happen? Does LabVIEW keep these as flags alongside the data, or does the compiler just know that the next loop should start from the back?
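To illustrate how a reverse can cost nothing, here is a sketch of the flag idea in C++ (my illustration of the concept, not how LabVIEW actually implements it):

```cpp
// Instead of moving elements, keep a "reversed" flag and remap the index on
// access, so the reverse itself is O(1) metadata.
#include <cstddef>
#include <iostream>
#include <vector>

template <typename T>
class ReversibleView
{
    const std::vector<T>& m_data;
    bool m_reversed;
public:
    ReversibleView(const std::vector<T>& data, bool reversed)
        : m_data(data), m_reversed(reversed) {}
    const T& operator[](std::size_t i) const
    {
        return m_reversed ? m_data[m_data.size() - 1 - i] : m_data[i];
    }
    std::size_t size() const { return m_data.size(); }
};

int main()
{
    std::vector<int> v{1, 2, 3, 4};
    ReversibleView<int> r(v, true);   // the "reverse": no element is moved
    for (std::size_t i = 0; i < r.size(); ++i)
        std::cout << r[i] << ' ';     // prints 4 3 2 1
}
```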
  18. If reliability is the #1 factor, I recommend the Advantech ADAM serial device server. It basically converts your RS-232 device into an Ethernet one, and you can use native LabVIEW Ethernet functions. I have had many issues with serial comms and LabVIEW, including hangs, bluescreens (even on W7) and all kinds of weird behavior. The device is quite costly, I think somewhere around 150 euros, but I have only had sane experiences with it.
  19. torekp, the SVM classifier is contained in the Vision Development Module. I guess you don't have it...?
  20. Thanks Jack, that is new information for me. However (also based on what RolfK wrote), I will rather implement it with my own mutex (or other sync object) that I have more control over. Since PostLVUserEvent has no timeout, it would lead to many unwanted locks that the LabVIEW programmer would have no clue about (they would happen in the DLL) and no control over.
  21. The task is to get LabVIEW data to C/C++ code, not the other way around. Fair enough. Thank you for the replies.
  22. I was hoping to make a LabVIEW VI that would eat the callback prototype as a parameter and build a DLL out of it in LabVIEW, then just load the code in the DLL and call that in my callback. It seems much more straightforward.
  23. How exactly do you imagine the mechanism would work? The only way I can think of, and the one I thought of before, is this (data reading example): the DLL code wants more data, so it calls the callback, which contains PostLVUserEvent (the argument contains a struct pointer holding a data pointer and a mutex pointer), and then locks itself on the provided mutex. This triggers the registered user event in an already running LabVIEW VI; the VI then executes a user-provided callback VI, and after the call is done, it makes a new DLL call that takes care of 1st copying the data to the provided data pointer and 2nd unlocking the mutex the first function was waiting for. It was very complicated. A sketch of this handshake is below.
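Here is roughly what I mean, as a hedged C++ sketch (names are made up; the PostLVUserEvent call and the LabVIEW side are only indicated in comments):

```cpp
// Sketch of the handshake described above: the DLL blocks on a sync object
// until a second exported function, called from the LabVIEW event handler,
// copies the data in and releases it.
#include <condition_variable>
#include <cstdint>
#include <cstring>
#include <mutex>

struct ReadRequest
{
    uint8_t *buffer;                 // where the LabVIEW side should copy data
    int32_t capacity;                // how many bytes the DLL asked for
    int32_t delivered = 0;           // filled in by the LabVIEW side
    bool done = false;
    std::mutex m;
    std::condition_variable cv;
};

// Called from the DLL's I/O callback: post the request, then block until served.
int32_t BlockingRead(ReadRequest &req /*, LVUserEventRef ev */)
{
    // PostLVUserEvent(ev, &req);    // would wake the registered event structure
    std::unique_lock<std::mutex> lock(req.m);
    req.cv.wait(lock, [&] { return req.done; }); // the lock the post talks about
    return req.delivered;
}

// Exported for the LabVIEW callback VI: copy the data and release the waiter.
extern "C" void ServeRead(ReadRequest *req, const uint8_t *data, int32_t len)
{
    {
        std::lock_guard<std::mutex> lock(req->m);
        std::memcpy(req->buffer, data, len);
        req->delivered = len;
        req->done = true;
    }
    req->cv.notify_one();
}
```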
  24. Hello, I am trying to figure out if it is possible to create a native LabVIEW I/O interface to C/C++ code that uses I/O functions through callbacks. The scenario is as follows: I made a toolkit to process multimedia formats and (so far only) video codecs. It internally uses FFmpeg shared objects (DLLs or SOs, further just "DLLs"). The way the FFmpeg DLLs access I/O is through a callback mechanism. The following all happens in the wrapper: if I want the toolkit to read data from some I/O like the network or disk, I have to create a session, and part of the session are function pointers to callbacks doing various tasks, one of them being reading data. Any time a user tries to read more data (1 data packet), he forces a call to the session demuxer, which keeps using the I/O callback until it parses one packet / hits EOF / hits corrupt data / etc. Now this works nicely if you want to provide data through standard means like a video file or a stream URL. Since the capabilities of FFmpeg include, for example, (de)muxers and codecs for images, I thought it would be cool to give the user the possibility to supply his own data, which he reads via LabVIEW from a database or whatever, and not just limit the usage to the I/Os implemented inside the DLLs. All I know is that I cannot use VIs as callbacks inside C/C++ code, so the only way is to somehow "decallback" the implemented I/O mechanism. A sketch of the FFmpeg side of this mechanism is below.
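For context, this is the callback mechanism in question, sketched against FFmpeg's actual custom-I/O entry point (error handling trimmed; the opaque pointer is where a LabVIEW-fed source would hang):

```cpp
extern "C" {
#include <libavformat/avformat.h>
}

// FFmpeg calls this whenever the demuxer wants more bytes.
static int read_cb(void *opaque, uint8_t *buf, int buf_size)
{
    // the real wrapper would hand over data supplied from LabVIEW here;
    // returning AVERROR_EOF just keeps the sketch self-contained
    (void)opaque; (void)buf; (void)buf_size;
    return AVERROR_EOF;
}

AVFormatContext *open_custom_io(void *user_data)
{
    const int bufsize = 4096;
    uint8_t *iobuf = (uint8_t *)av_malloc(bufsize);
    AVIOContext *avio = avio_alloc_context(iobuf, bufsize, 0 /* read-only */,
                                           user_data, read_cb, nullptr, nullptr);
    AVFormatContext *fmt = avformat_alloc_context();
    fmt->pb = avio;                         // attach the custom I/O to the session
    if (avformat_open_input(&fmt, "", nullptr, nullptr) < 0)
        return nullptr;
    return fmt;
}
```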
  25. Yes, sorry, I ask a lot of questions because I have no idea how the real-time thing is done by NI. It might be a better idea to actually get my hands on one of those little things and install the Real-Time Module.