
Youssef Menjour

Members
  • Posts: 80
  • Joined
  • Last visited
  • Days Won: 8

Everything posted by Youssef Menjour

  1. It's a shame, because our library is really disruptive. Being able to easily integrate any deep-learning model into LabVIEW architectures is a pure joy. One year to convince people it was possible. One year of hard work to develop the Graphical Deep Learning Library. One year of difficulties. One year to do it.
  2. Hello everyone, it is now time for us to communicate about the project. First of all, thank you for the support some of you have given us over the summer. We didn't go on vacation; we kept working on the project, and we didn't communicate much because we were so busy. HAIBAL will be released soon, with some delay, but it will be released. We have solved many of our problems and are actively continuing development. Release 1 should be coming soon, and we are thinking of setting up a free beta version so the community can give us feedback on the product. (What improvements would you like to see?) For the official release we might be a little late, because the graphics part is not far along yet. We still have to make a website and a YouTube channel for the HAIBAL project, and I won't even mention the design of the icons, which has not been started yet. In short, the designer has a lot of work. In the meantime, here is the promotional video of HAIBAL! See you soon, community! Be patient, the revolution is coming.
  4. Thank you for your help, but I'm not sure that's the right way to solve this problem, because if I understood correctly, you propose modifying my machine's configuration to make the VI work. That's not user friendly if I want to use MKL inside an exported library (or I would have to script the installation to automate it). By the way, I also looked at the dependencies and found MKL_intel_THREAD.2.DLL (C:\Program Files (x86)\Intel\oneAPI\mkl\2022.1.0\redist\intel64); unfortunately, moving this DLL didn't work (well, I tried!). --> I suppose MKL_intel_THREAD.2.DLL calls other DLLs, so I would have to scan that one to learn its dependencies, etc. --> Maybe there is a better way to solve this (edit: done, see image above 🤠 --> tried and failed). Is it possible to script the PATH environment variable modification to make this more acceptable? (I already know the answer is yes, but my current knowledge of this subject is low.) We can take inspiration from the DNNL library (another library in the Intel package): Intel provides a script to fix the environment variables, but when I launch it in my cmd console it does not seem to work. I suppose I'm doing it wrong. vars.bat
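Instead of editing the global PATH, a process on Windows can extend its own DLL search path before the dependent library is loaded. Here is a minimal sketch using the documented Win32 call `SetDllDirectoryA`; the oneAPI path is only an illustration of where the MKL redistributables live, and the non-Windows branch is a no-op so the snippet stays portable:

```c
#ifdef _WIN32
#include <windows.h>
#endif

/* Add 'dir' to this process's DLL search path, so dependencies such as
   mkl_intel_thread.2.dll can be resolved without editing the global PATH.
   The directory to pass would be something like
   "C:\\Program Files (x86)\\Intel\\oneAPI\\mkl\\2022.1.0\\redist\\intel64". */
static int add_dll_search_dir(const char *dir)
{
#ifdef _WIN32
    /* SetDllDirectoryA inserts 'dir' into the search order used by
       subsequent LoadLibrary calls in this process only. */
    return SetDllDirectoryA(dir) ? 0 : -1;
#else
    (void)dir;   /* Windows-only API; nothing to do elsewhere. */
    return 0;
#endif
}
```

Calling this once at startup (or from a wrapper DLL's DllMain) avoids having to script a machine-wide PATH change for every end user.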
  5. It works!!! 💪 SOLVED. I put sycl.dll in the same place as my called DLL, and with no more errors LabVIEW worked fine!!! Dadreamer, many thanks!!!!!!! 💪 SOLVED. OK, let's try to solve the next one now!!
  6. I have the same problem (LabVIEW error 13) when I use a function from the MKL library (Math Kernel Library). There must be a file dependency that is not loaded at DLL run time. The question now is: how do I add its dependencies properly? 🤔 Another question comes to mind: logically, as it stands, my DLL should not work if I call it from C code either (this is normally independent of LabVIEW) --> I will check (I need to find out how to call a DLL from C code). If that is the case, then our solution is to include all runtime dependencies (which is of course possible; you just have to know how to do it). One thing is sure: I will have learned a lot!
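Checking the DLL outside LabVIEW only takes a small C host that loads it dynamically: if `LoadLibrary` fails with the same missing-dependency error, the problem is independent of LabVIEW. A sketch, assuming the export is the `Mult` function from this thread (the DLL file name is hypothetical):

```c
#include <stdio.h>
#ifdef _WIN32
#include <windows.h>
#endif

typedef int (*mult_fn)(int);

/* Load 'path', resolve "Mult", call it with 4 and return the result.
   Negative return values are errors from this host, not from the DLL. */
static int try_dll(const char *path)
{
#ifdef _WIN32
    HMODULE h = LoadLibraryA(path);   /* triggers dependency resolution */
    if (!h) {
        printf("LoadLibrary failed: %lu\n", GetLastError());
        return -1;
    }
    mult_fn mult = (mult_fn)GetProcAddress(h, "Mult");
    if (!mult) { FreeLibrary(h); return -2; }
    int r = mult(4);
    FreeLibrary(h);
    return r;
#else
    (void)path;
    return -3;   /* Windows-only sketch */
#endif
}
```

If `try_dll("MyDpcppDll.dll")` reports a LoadLibrary failure, the missing runtime dependency (sycl.dll, the MKL threading DLLs, ...) has to be shipped next to the DLL or put on the search path.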
  7. It seems that you are right (which reassures me, because your reasoning is logical). Here is the screenshot of Dependency Walker. Are there routines to integrate into my code in order to remove these objections?

Error: At least one required implicit or forwarded dependency was not found.
Warning: At least one delay-load dependency module was not found.

As a reminder:

Header.h
///////////////////////////////////////////////////////////////////////////////////////////////////////////
#pragma once

#ifdef DPCPP_DLL_EXPORTS
#define DPCPP_DLL_API __declspec(dllexport)
#else
#define DPCPP_DLL_API __declspec(dllimport)
#endif

extern "C" __declspec(dllexport) DPCPP_DLL_API int __stdcall Mult(int a);
///////////////////////////////////////////////////////////////////////////////////////////////////////////

dllmain.cpp
///////////////////////////////////////////////////////////////////////////////////////////////////////////
#include "pch.h"
#include "Header.h"

__declspec(dllexport) int Mult(int a)
{
    return a * 3;
}

BOOL APIENTRY DllMain(HMODULE hModule,
                      DWORD ul_reason_for_call,
                      LPVOID lpReserved)
{
    switch (ul_reason_for_call)
    {
    case DLL_PROCESS_ATTACH:
    case DLL_THREAD_ATTACH:
    case DLL_THREAD_DETACH:
    case DLL_PROCESS_DETACH:
        break;
    }
    return TRUE;
}
///////////////////////////////////////////////////////////////////////////////////////////////////////////

dllmain.cpp Header.h
  8. Thanks a lot! Unfortunately, it doesn't solve the problem. This issue is very strange!
  9. Regarding the release build, I haven't tried it because I didn't understand the difference or how to make it work. Could you be more explicit? "this Shareable board exclusively owned." is very strange.
  10. 😱 😱 😱 😱 😱 Where can I specify the calling convention?! (Really sorry for the question.)
  11. Hi everybody, Intel recently released a DPC++ (Data Parallel C++) compiler that optimizes for speed on Intel CPUs and GPUs. My problem is that when I compile the functions with the normal Intel 2022 compiler (or the classic Visual Studio compiler) there is no problem, but when I use the new Intel DPC++ compiler, LabVIEW returns an error. Both Intel compilers work perfectly in C and C++ under Visual Studio. As an example, I made a simple function that just multiplies an int32 by 3 and returns the result. The DPC++ compiler only targets the x64 architecture, and I use LabVIEW 2020. I made a video to show the problem: LabVIEW DLL issue DPC++.mp4 File here: https://we.tl/t-9Iwkf1IGvr (you can compile it yourself and see the problem). I added all the installers (Visual Studio 2022 + Intel compiler + Intel DPC++ compiler) in the "install" repository. (In Visual Studio, Alt-F7 goes directly to the project properties where you can change the compiler; F5 compiles.) Can someone tell me what's wrong, and how I can make my DLL work with the DPC++ compiler?
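For reference, the calling convention is fixed in the C source and must match the choice made in LabVIEW's Call Library Function Node configuration dialog (which offers "C" and "stdcall"). A minimal sketch of the same toy export with the convention made explicit; note that on x64, the only architecture DPC++ targets, there is a single native calling convention and `__stdcall` is accepted but ignored:

```c
/* Portable export macros: __declspec and __stdcall only exist on Windows
   toolchains, so they are compiled out elsewhere. */
#if defined(_WIN32)
  #define DLL_EXPORT __declspec(dllexport)
  #define CALLCONV __stdcall
#else
  #define DLL_EXPORT
  #define CALLCONV
#endif

/* Same toy function as in the thread: multiply an int32 by 3. */
DLL_EXPORT int CALLCONV Mult(int a)
{
    return a * 3;
}
```

Because the x64 ABI has one convention, a calling-convention mismatch is usually not the cause of error 13 on 64-bit LabVIEW; a missing runtime dependency of the DPC++-built DLL is the more likely suspect.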
  12. Thanks, Rolf, for the explanation (I still need to digest it all). The best way to acquire experience is to experiment! Reading some of you, it feels like we are operating a nuclear plant 😆 --> worst case, LabVIEW crashes (we'll survive, really!!).

I am working on a project where I need time performance on array operations on the CPU (read, compute, write).
Good news: the arrays are fixed size (no pointer reallocation and no resizing).
Bad news: the arrays can be 1D, 2D, 3D or 4D. (The access times with the native LabVIEW function palette are not satisfactory for our application --> we need to find a better solution.)

By analogy, we assume that access to an array is limited on a PC as it is on an FPGA (there, the physical limit on reading/writing an array's data is 2 read/write ports per clock cycle, whatever the size of the array). There is also the O(N) rule, which says the access time (read/write) to an array's data is proportional to its size N --> I may be wrong here.

In any case, to improve array access (read/write), a simple solution is to organize the data ourselves: the array is split into several arrays (pointers) to multiply the access speed --> O(N) becomes in theory O(N/n), and the number of ports is multiplied by n. We navigate this "array" by addressing the right part (the right pointer). Some will ask: why not just split your array in LabVIEW and be done with it? Simply because navigating with pointers avoids unnecessary data copies at every level and therefore saves processing time. We tested it, and we saw a noticeable difference! In theory, doing it this way is much more complex to manage, but it has the advantage of being faster for reading/writing data, which is in fact the main problem.

Now, why am I having fun with C/C++? Simply in case we can't go fast enough on some operations; in that case we transfer the data via pointers (as I said, a well-managed pointer is the best solution: no copy), and we use C/C++ libraries like Boost that are optimized for some operations. MoveBlock is a very interesting function! So the next step is to code and test 3D/4D arrays and, using only the primary pointer address, navigate very fast inside the arrays (recode Replace, recode Index, code Build Array). I found some old documentation and topics about memory management, and they helped me a lot. Thank you again, Rolf; I have seen many of your posts help a lot.
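The splitting scheme described above can be sketched in plain C (all names here are mine, not from the post): one logical 1-D array stored as n separate blocks behind a pointer table, so different blocks can be worked on without copying. One caveat: indexing a single contiguous C array is already O(1), so the gain of this layout is copy-free, per-block access for parallel workers, not faster indexing as such.

```c
#include <stdlib.h>

/* One logical array of n_blocks * block_len doubles, stored as
   n_blocks independent allocations reached through a pointer table. */
typedef struct {
    size_t n_blocks;
    size_t block_len;
    double **blocks;    /* n_blocks pointers, each to block_len doubles */
} SplitArray;

static SplitArray *split_array_new(size_t n_blocks, size_t block_len)
{
    SplitArray *a = malloc(sizeof *a);
    a->n_blocks = n_blocks;
    a->block_len = block_len;
    a->blocks = malloc(n_blocks * sizeof *a->blocks);
    for (size_t b = 0; b < n_blocks; b++)
        a->blocks[b] = calloc(block_len, sizeof(double));
    return a;
}

/* Map a global index to (block, offset): pointer math only, no copy. */
static double *split_array_at(SplitArray *a, size_t i)
{
    return &a->blocks[i / a->block_len][i % a->block_len];
}

static void split_array_free(SplitArray *a)
{
    for (size_t b = 0; b < a->n_blocks; b++)
        free(a->blocks[b]);
    free(a->blocks);
    free(a);
}
```

Each worker thread can then hold one entry of `blocks` and read/write its region with no shared bookkeeping, which is where the measured difference most plausibly comes from.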
  13. Rolf, if I understood you correctly, if I do this:

DLL_EXPORT unsigned int* Tab1D_int_Ptr(unsigned int* ptr){ return ptr; }

with the data coming from LabVIEW, it means the memory address could be released at any time by LabVIEW? (That's logical.) --> Method 2 is a solution in that case (a pointer created in LabVIEW with DSNewPtr + MoveBlock). I have another question about arrays: what is the difference between passing by pointer and passing by handle? I mean, with the handle method we have a struct that implicitly gives C/C++ the length of the array, but is there another difference? (Ugly structure syntax 🥵) (Many thanks for the example, cordm! 👍) The image is a VI snippet. ShaunR, I'm not far from doing what I want 😉 Pointeur.dll
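On the pointer-versus-handle question: a LabVIEW 1-D numeric array handle is a pointer to a pointer to a block that starts with an int32 dimension size immediately followed by the data (see LabVIEW's extcode.h, which declares the trailing array as `elt[1]`), so the length travels with the data. Below is a host-side simulation using malloc; in a real Call Library Function Node the handle would come from LabVIEW itself (e.g. DSNewHandle):

```c
#include <stdint.h>
#include <stdlib.h>

/* Simulated layout of a LabVIEW 1-D u32 array handle. */
typedef struct {
    int32_t dimSize;    /* number of elements */
    uint32_t elt[];     /* dimSize elements follow in the same block */
} Arr1D;
typedef Arr1D **Arr1DHdl;

/* Sum the array through the handle. No separate length parameter is
   needed, which is the practical difference from a bare pointer. */
static uint64_t sum_u32_handle(Arr1DHdl h)
{
    uint64_t s = 0;
    for (int32_t i = 0; i < (*h)->dimSize; i++)
        s += (*h)->elt[i];
    return s;
}
```

The other difference, beyond the embedded length, is that a handle is relocatable: LabVIEW may resize or move the data block and update the inner pointer, which is why C code must always dereference the handle rather than cache the inner address.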
  14. Thank you all for your information; I will be working sequentially and synchronously. There is a lot of interesting information in these different posts! Now we'll have some fun for a while!
  15. Hello ShaunR, first of all, thank you for taking the time to answer me. In view of your answer, I will put the question back in context. First, I agree that you have to initialize and delete a pointer correctly. This is an example whose aim is to understand memory management under LabVIEW; I want to understand how it works. I don't agree with your answer forbidding pointer manipulation: once the subject is mastered, there is no reason to be afraid of it. What I want to understand is how LabVIEW stores a variable in memory when it is declared. Is it strictly like in C/C++? Let's take an example with an array of U8. In this case, by manipulating the pointers properly, it is interesting to declare an array variable in LabVIEW, then transmit its address to a DLL (first difficulty), manipulate it as needed in C/C++, and then return to LabVIEW to continue the flow. Why do I want to do this? Because it seems (I say "seems" because it's probably not inevitable) that LabVIEW operations are slow, too slow for my application! As you know, we are working on the development of a deep-learning library, and it is greedy in computation, so we need to accelerate it with multithreaded C/C++ libraries (unless there is an equivalent in LabVIEW, but I doubt it for the moment). Just to give you a comparison: if we settle for using LabVIEW normally, we are 10 times slower than Python!! Is it possible to pipeline a loop in LabVIEW? Is it possible to merge nested loops in LabVIEW? Finally, about data transfer: I understand perfectly that in terms of safety, copying the data into the DLL, using it, then restoring it to LabVIEW is tempting, but the worry is the data-transfer delay. That's what we want to avoid! I think it's silly to copy data that already exists in memory; why not use it directly (provided one masters the subject)? The copies and transfers make us lose time. Can you please give me some answers? Thank you very much.
  16. Hi everybody, in our quest to optimize our code, we are trying to interface variables declared in LabVIEW with C/C++ processing (a DLL).

1 - In my example, we had fun declaring a U32 variable in LabVIEW, then we created a pointer in C to assign it the value we wanted (a copy), then we restored the value to LabVIEW. In this case everything works correctly. Here is the code in C. Hence my question: am I breaking my head unnecessarily? Does my function set already exist in the LabVIEW DLL? (I have a feeling one of you will tell me...)

2 - In our second experiment (more interesting), we assign the address of the U32 variable declared in LabVIEW to our pointer; this time the idea is to act directly from C on the variable declared in LabVIEW. We read this address, then we try to manipulate the value of this variable via the pointer in C, and it does not work! Why? Or did I make a mistake in my reasoning? This experiment aims to master the memory management of data declared in LabVIEW at the C level. The idea would then be to do the same thing with U32 or SGL arrays.

3 - When I declare a variable in LabVIEW, how is it managed in memory? Is it done like in C/C++?

4 - Last question: the MoveBlock function gives me the value behind a pointer (read); which function allows me to write to a pointed cell? I put the source code in a zip file: DLL pointeur.zip
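On question 4: `MoveBlock(src, dst, size)` in the LabVIEW Manager is essentially memcpy (source first, destination second), so the same call both reads from and writes to a raw address, depending on which side of the copy the pointer sits. A sketch with memcpy standing in for MoveBlock:

```c
#include <string.h>
#include <stdint.h>

/* Stand-in for LabVIEW's MoveBlock(ps, pd, size): plain byte copy. */
static void move_block(const void *src, void *dst, size_t size)
{
    memcpy(dst, src, size);
}

/* Read a u32 out of a raw address: the pointer is the SOURCE. */
static uint32_t peek_u32(const void *addr)
{
    uint32_t v;
    move_block(addr, &v, sizeof v);
    return v;
}

/* Write a u32 into a raw address: the pointer is the DESTINATION. */
static void poke_u32(void *addr, uint32_t v)
{
    move_block(&v, addr, sizeof v);
}
```

So there is no separate "write" function to look for: swapping the argument order of MoveBlock turns a read into a write.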
  17. It seems your page is not working, Rolf.
  18. Thank you very much!! I will have a look; it will be very useful for us for optimizing our execution code!
  19. Sorry, guys, if I sound like a newbie, but what is a "CIN"? Another question: I remember seeing, around 2011, that LabVIEW had a C code generator. Do you know why this option is no longer available?
  20. OK, thanks, guys, for all this feedback! It's for our HAIBAL project: we will soon start optimizing our code, and I'm exploring different possibilities. We are continuing Hugo's work, always on our famous stride! And our dream now is to finish on a Xilinx FPGA platform. I want to prove that we can also be efficient in computation with LabVIEW. (Maybe we will have to precompile a large part of our code as DLLs to make it more efficient.)
  21. Hello LabVIEW community, is there documentation about the LabVIEW DLL functions? We would like a global view for our team in order to explore the possibilities of these functions. (Maybe another topic has already covered this.) Thanks for your help.
  22. The HAIBAL toolbox will let the user build his own architecture / training / prediction natively in LabVIEW. Of course we will natively provide numerous examples like YOLO, MNIST, VGG... (which users can directly use and modify). As our toolkit is new, we chose to also be fully compatible with Keras. This means that if you already have a model trained in Keras, it will be possible to import it into HAIBAL. This also opens our library to the thousands of models available on the internet. (Every import is translated into native HAIBAL LabVIEW code, editable by users.) In this case, you will have two choices: 1 - use it in LabVIEW (predict / train); 2 - generate the full native HAIBAL architecture equivalent (as you can see in the video) in order to modify it as you wish. HAIBAL is more than 3,000 VIs; it represents a huge amount of work, and we are not yet finished. We hope to release the first version this summer (with CUDA), and we hope NI FPGA optimization will speed up inference. (OpenCL and full Xilinx FPGA compatibility will also come during 2022/2023.) We are currently building our website and our YouTube channel. The team will offer tutorials (YouTube/GitHub) and news (website) to give users visibility. In this video we import a Keras VGG-16 model saved in HDF5 format into the HAIBAL LabVIEW deep-learning library. Then, with our scripting, we can generate the graph, allowing the user to modify any architecture for his purpose before running it.
  23. Thank you very much for your encouragement. 😀 Yes, we can confirm that it took a lot of work, and your encouragement pushes us to do more! We also thank you for your suggestions for improvement (we are interested!! the objective of HAIBAL is to be a user-friendly library). I (Youssef Menjour) have always liked LabVIEW and artificial intelligence, and it was frustrating not to have an efficient tool at our disposal. We will start sharing more and more examples in the next few days. 🧑‍🎓 We will also soon offer a free library to pilot a drone easily affordable on Amazon, because HAIBAL will include an example of an AI-assisted autopilot for this drone (and a complete tutorial on YouTube). We are also thinking about doing the same with a "mini cheetah" type robot. In short, things will move in the next weeks; we still have a lot of work, and once again your encouragement makes us really happy. LabVIEW without AI is a thing of the past. 💪 This example is a template state machine using the HAIBAL library. It shows a signal (here a sinc) that the neural network has to learn to predict during its training (here we chose 50 neurons per layer, 10 layers, dense layers). This template will be offered as a basic example to understand how we initialize, train and use a neural network model. This kind of "visualization example" is inspired by https://playground.tensorflow.org/ to help those who want to start learning deep learning.
  24. We started developing HAIBAL 9 months ago with LabVIEW 2020 (the last version with a perpetual licence), so it will be compiled for and offered in 2020. For HDF5 we use Python to import the information we need. We made this choice because if you import HDF5, it means you already use Python (from PyTorch or Keras). This is the only part using Python; everything else is coded in native LabVIEW. 💪
  25. Dear community, TDF is proud to announce the upcoming release of the HAIBAL library for deep learning in LabVIEW. The HAIBAL project is structured in the same way as Keras. The project consists of more than 3,000 VIs, all coded in native LabVIEW: 😱😱😱

16 activations (ELU, Exponential, GELU, HardSigmoid, LeakyReLU, Linear, PReLU, ReLU, SELU, Sigmoid, SoftMax, SoftPlus, SoftSign, Swish, TanH, ThresholdedReLU): nonlinear mathematical functions generally placed after each layer having weights.
84 functional layers (Dense, Conv, MaxPool, RNN, Dropout, etc.).
14 loss functions (BinaryCrossentropy, BinaryCrossentropyWithLogits, Crossentropy, CrossentropyWithLogits, Hinge, Huber, KLDivergence, LogCosH, MeanAbsoluteError, MeanAbsolutePercentage, MeanSquare, MeanSquareLog, Poisson, SquaredHinge): functions evaluating the prediction against the target.
15 initialization functions (Constant, GlorotNormal, GlorotUniform, HeNormal, HeUniform, Identity, LeCunNormal, LeCunUniform, Ones, Orthogonal, RandomNormal, RandomUniform, TruncatedNormal, VarianceScaling, Zeros): functions initializing the weights.
7 optimizers (Adagrad, Adam, Inertia, Nadam, Nesterov, RMSProp, SGD): functions updating the weights.

Currently we are working on full Keras compatibility via HDF5 files, and we will soon start the same job for PyTorch. (We are able to load models from it, and will be able to save models to it in the future; this part is important for us.) Obviously, CUDA already works if you use an NVIDIA board, and NI FPGA boards will follow (not done yet). We are also working on full integration of the Xilinx Alveo systems for acceleration. Users will be able to build all the models they want; the only limitation will be their hardware (we will offer the same liberty as Keras or PyTorch), and in the future our company could offer hardware (a Linux server with a Xilinx Alveo card, for example --> https://www.xilinx.com/products/boards-and-kits/alveo.html, all fully HAIBAL compatible!!!).

About the project communication: the website will be completely redone, a YouTube channel will be set up with many tutorials, and a set of well-known examples will be offered within the library (YOLO, MNIST, etc.). For now we haven't set a release date, but we are thinking of next July (it's not official; we are a small, passionate team of 3 working on it, and we are doing our best to release it soon). This work is titanic, and believe me, it makes us happy that you encourage us in it (it boosts us). In short, we are doing our best to release this library as soon as possible. Still a little patience...

YouTube video: this example is a template state machine using the HAIBAL library. It shows a signal (here a cosine) that the neural network has to learn to predict during its training (here we chose 40 neurons per layer, 5 layers, dense layers). This template will be offered as a basic example to understand how we initialize, train and use a neural network model. This kind of "visualization example" is inspired by https://playground.tensorflow.org/ to help those who want to start learning deep learning.