Leaderboard

Popular Content

Showing content with the highest reputation on 08/22/2022 in all areas

  1. You have two very similar threads on this forum, which makes it difficult to have a linear discussion: https://lavag.org/topic/22489-let%E2%80%99s-make-deep-learning-easy-with-haibal-library-on-labview/#comment-142891 We all agree that graphical programming has tons of advantages over pages of text code. However, most of us hate being arm-twisted into paying annually for new versions of a language that fixes bugs at a snail's pace (if ever), releases features that take 2 to 3 versions to work as advertised, and offers an abysmal level of technical support (and the list of recriminations could go on and on). Your toolkit looks truly amazing, but I would argue that you are not alone in having invested a massive amount of time and effort in G-code. Yet some of us who have, have come to the conclusion that we cannot reasonably tie our work to NI's whims and suicidal plans for LabVIEW. I have decided to migrate as fast as I can to Python. I'd argue that someone who is willing to do so (migrate to Python) will find the resources to master the basics of DL in Python, not to mention that you'll have to keep running to stay on top of the new developments in the field with your toolkit. How you can hope to do that for free is a mystery to me. As I said, admirable, but a tad idealistic... Sorry to be the bearer of bad news, but "un homme averti en vaut deux" (forewarned is forearmed), as the French say.
    1 point
  2. The idea is to propose not only a simple library for LabVIEW but also an alternative to Keras and PyTorch. HAIBAL is a powerful modular library and more practical than Python. Being able to run any deep learning graph on CUDA or a Xilinx FPGA platform without any special syntax is really good for AI users. We are convinced that doing artificial intelligence with LabVIEW is the way of the future. The graphical language is perfect for this. The promotional video is not aimed at LabVIEW users; it is for those who want to do deep learning in a graphical language. We are addressing engineering students, professors, researchers and the R&D departments of companies. LabVIEW users already know what drag and drop and data flow are, so they do not need this kind of video.
    1 point
  3. The problem looks to be that zmq_labview.c is calling functions in the ZeroMQ library which are specified as draft functions and are disabled in stable release versions. These functions are listed in the compilation output as implicitly declared, and when configuring the CLFN in LabVIEW, it throws a warning that it can't find them. This is likely the cause of error 13. If you check zmq.h, it has the following comment:

     /******************************************************************************/
     /*  These functions are DRAFT and disabled in stable releases, and subject to */
     /*  change at ANY time until declared stable.                                 */
     /******************************************************************************/
     #ifdef ZMQ_BUILD_DRAFT_API
     ...
     /*  DRAFT Socket methods.  */
     ZMQ_EXPORT int zmq_join (void *s, const char *group);
     ZMQ_EXPORT int zmq_leave (void *s, const char *group);
     ...
     #endif

     If -D ZMQ_BUILD_DRAFT_API is added to the compile flags, the implicit-declaration warnings are no longer present. But you still have the problem that the stable release of ZeroMQ doesn't export these functions. So you'll either need to find and install a draft build (these don't appear to be pre-built for Ubuntu), compile ZeroMQ yourself with the draft APIs enabled (possibly using -DENABLE_DRAFTS=ON), or replace the draft functions used in zmq_labview.c.
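     As a quick way to check whether your installed libzmq actually exposes the draft API, here is a minimal C sketch (my own illustration, not part of zmq_labview.c); it assumes libzmq was built with -DENABLE_DRAFTS=ON and that the file is compiled with something like: gcc zmq_draft_check.c -DZMQ_BUILD_DRAFT_API -lzmq -o zmq_draft_check

     /* zmq_draft_check.c - hypothetical sanity check for the ZeroMQ draft API */
     #include <stdio.h>
     #include <zmq.h>

     int main(void)
     {
     #ifdef ZMQ_BUILD_DRAFT_API
         void *ctx = zmq_ctx_new();
         /* ZMQ_DISH is itself a draft socket type, only declared when the draft API is enabled */
         void *dish = zmq_socket(ctx, ZMQ_DISH);
         if (dish == NULL) {
             /* EINVAL here usually means the library binary was built without drafts */
             fprintf(stderr, "zmq_socket(ZMQ_DISH) failed: %s\n", zmq_strerror(zmq_errno()));
             zmq_ctx_term(ctx);
             return 1;
         }
         /* zmq_join/zmq_leave only exist when the draft API is enabled */
         if (zmq_join(dish, "mygroup") != 0)
             fprintf(stderr, "zmq_join failed: %s\n", zmq_strerror(zmq_errno()));
         else
             printf("zmq_join worked, draft API is available\n");
         zmq_leave(dish, "mygroup");
         zmq_close(dish);
         zmq_ctx_term(ctx);
         return 0;
     #else
         printf("ZMQ_BUILD_DRAFT_API was not defined at compile time\n");
         return 1;
     #endif
     }

     If this fails to link with undefined references to zmq_join/zmq_leave, the installed library was built without drafts, which is exactly the situation the stable Ubuntu packages leave you in.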
    1 point