Youssef Menjour

Members
  • Posts: 16
  • Joined
  • Last visited
  • Days Won: 5

Youssef Menjour last won the day on April 4

Youssef Menjour had the most liked content!

LabVIEW Information

  • Version: LabVIEW 2020
  • Since: 2016

Reputation: 13

  1. Thanks Rolf for the explanation (I still need to digest it all). The best way to gain experience is to experiment! Reading some of you, it feels like we are operating a nuclear plant 😆 --> worst case, LabVIEW crashes (we'll survive, really!). I am working on a project where I need fast array operations on the CPU (read, compute, write). Good news: the arrays are fixed size (no pointer reallocation and no resizing). Bad news: the arrays can be 1D, 2D, 3D or 4D, and the access times with the native LabVIEW palette functions are not satisfactory for our application, so we need a better solution. By analogy, we assume that access to an array is as limited on a PC as it is on an FPGA (where the physical limit is two read/write ports per clock cycle, whatever the size of the array). There is also the O(N) rule, which says that the read/write access time for an array element is proportional to its size N (I may be wrong here). In any case, to improve read/write access to an array, a simple solution is to organize the data ourselves: the array is split into several sub-arrays (pointers) to multiply the access speed, so O(N) in theory becomes O(N/n) and the number of ports is multiplied by n. We navigate this "array" by addressing the right part (the right pointer); a rough C sketch of this idea follows this post. Some will ask: why not simply split the array in LabVIEW and be done with it? Simply because navigating with pointers avoids unnecessary data copies at every level and therefore saves processing time. We tested it and saw a noticeable difference! In theory this approach is much more complex to manage, but it has the advantage of being faster for reading and writing the data, which is in fact our main problem. Now, why am I playing with C/C++? Simply because, if we cannot go fast enough for some operations, we transfer the data via pointers (as I said, a well-managed pointer is the best solution: no copy) and use C/C++ libraries such as Boost that are optimized for some operations. MoveBlock is a very interesting function! So the next step is to code and test 3D/4D arrays and, with only the primary pointer address, navigate inside the arrays very fast (recode Replace, recode Index, code building the final array). I found some old documentation and topics about memory management and they helped me a lot. Thank you again Rolf, because I have seen many of your posts helping a lot.
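A minimal C sketch of the splitting idea described above, assuming a fixed-size flat buffer divided into n sub-blocks addressed through a small pointer table; the names (SplitArray, split_get, ...) are illustrative and not from the original post:

```c
#include <stddef.h>
#include <stdint.h>

#define MAX_BLOCKS 8

/* A fixed-size flat buffer split into n sub-blocks, each reachable through
   its own pointer, so blocks can be addressed independently.               */
typedef struct {
    uint32_t *blocks[MAX_BLOCKS]; /* start address of each sub-block        */
    size_t    block_len;          /* elements per sub-block                 */
    size_t    n_blocks;           /* number of sub-blocks (<= MAX_BLOCKS)   */
} SplitArray;

void split_init(SplitArray *s, uint32_t *data, size_t total, size_t n)
{
    s->n_blocks  = n;
    s->block_len = total / n;            /* assumes total is a multiple of n */
    for (size_t i = 0; i < n; i++)
        s->blocks[i] = data + i * s->block_len;
}

/* Element access is pure address arithmetic through the pointer table: no copy. */
uint32_t split_get(const SplitArray *s, size_t idx)
{
    return s->blocks[idx / s->block_len][idx % s->block_len];
}

void split_set(SplitArray *s, size_t idx, uint32_t value)
{
    s->blocks[idx / s->block_len][idx % s->block_len] = value;
}
```

Each sub-block pointer can then be handed to a separate worker or DLL call, which is where the O(N/n) intuition from the post comes in; for a single sequential reader it is still just address arithmetic.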
  2. Rolf, if I understood you correctly: if I do this, DLL_EXPORT unsigned int* Tab1D_int_Ptr(unsigned int* ptr){ return ptr; } with the data coming from LabVIEW, it means the memory address could be released at any time by LabVIEW? (That's logical.) --> Method 2 is the solution in that case (a pointer created in LabVIEW with DSNewPtr + MoveBlock). I have another question about arrays: what is the difference between passing by pointer and passing by handle? I mean, with the handle method a struct implicitly gives the array length to the C/C++ side, but is there another difference? (See the sketch below this post.) (Ugly structure syntax 🥵) (Many thanks for the example, cordm! 👍) The image is a VI snippet. ShaunR, I'm not far from doing what I want 😉 Pointeur.dll
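A hedged sketch of the pointer-versus-handle difference, assuming extcode.h from LabVIEW's cintools folder and the same DLL_EXPORT macro as in the snippet above; the function names are hypothetical:

```c
#include "extcode.h"   /* LabVIEW cintools header: defines int32, uInt32, ... */

/* Array configured as "Array Data Pointer" in the Call Library Function Node:
   only the raw data arrives, so the length must be passed as a separate parameter. */
DLL_EXPORT uInt32 SumByPointer(const uInt32 *data, int32 len)
{
    uInt32 sum = 0;
    for (int32 i = 0; i < len; i++)
        sum += data[i];
    return sum;
}

/* Array configured as "Array Handle" in the Call Library Function Node:
   the handle points to a block whose first field is the element count. */
typedef struct {
    int32  dimSize;     /* number of elements in the array        */
    uInt32 elt[1];      /* first element; data continues in place */
} U32Array;
typedef U32Array **U32ArrayHdl;

DLL_EXPORT uInt32 SumByHandle(U32ArrayHdl arr)
{
    if (!arr || !*arr) return 0;
    uInt32 sum = 0;
    for (int32 i = 0; i < (*arr)->dimSize; i++)
        sum += (*arr)->elt[i];
    return sum;
}
```

Beyond carrying the length, the other practical difference is ownership: a handle is memory managed by LabVIEW (it can be resized with the memory manager functions), whereas an array data pointer is only guaranteed to stay valid for the duration of the call.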
  3. Thank you all for your information; I will be working sequentially and synchronously. There is a lot of interesting information in these different posts! Now we'll have some fun for a while!
  4. Hello ShaunR, First of all, thank you for taking the time to answer me. Given your answer, I will recontextualize the question. First of all, I agree that you have to initialize and delete a pointer correctly. This example aims at understanding memory management under LabVIEW; I want to understand how it works. I don't agree with your answer forbidding the manipulation of pointers: once the subject is mastered, there is no reason to be afraid of it. What I want to understand is how LabVIEW stores a variable in memory when it is declared. Is it strictly like in C/C++? Let's take an example with an array of U8. In this case, by manipulating the pointers properly, it is interesting to declare an array variable in LabVIEW, then pass its address to a DLL (first difficulty), manipulate it as needed in C/C++, then return to LabVIEW to continue the flow (a sketch of this follows this post). Why do I want to do this? Because it seems (I say seems, because it is probably not necessary) that LabVIEW operations are slow, too slow for my application! As you know, we are working on the development of a deep-learning library, which is computation-heavy, so we need to accelerate it with multithreaded C/C++ libraries (unless there is an equivalent in LabVIEW, but I doubt it for the moment). Just to give you a comparison: if we are content to use LabVIEW normally, we are 10 times slower than Python!! Is it possible to pipeline a loop in LabVIEW? Is it possible to merge nested loops in LabVIEW? Finally, about the data transfer: I understand perfectly that, in terms of safety, copying the data into the DLL, using it, and then restoring it to LabVIEW is tempting, but the worry is the data-transfer delay. That is what we want to avoid! I think it is pointless to copy data that already exists in memory; why not use it directly (provided the subject is mastered)? The copy and the transfer make us lose time. Can you please give me some answers? Thank you very much
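To illustrate the "use the data where it already is" idea (a sketch under assumptions, not code from the post): when the U8 array is passed to the Call Library Function Node as an Array Data Pointer, the DLL can work directly on LabVIEW's buffer for the duration of the call. The function below and the DLL_EXPORT macro are hypothetical placeholders.

```c
#include <stdint.h>

/* Hypothetical in-place operation on a U8 array received from LabVIEW as an
   Array Data Pointer: the loop writes back into LabVIEW's own buffer, so no
   copy is made on the C side. The buffer is only valid during this call.   */
DLL_EXPORT void ScaleU8InPlace(uint8_t *data, int32_t len, uint8_t gain)
{
    if (!data) return;
    for (int32_t i = 0; i < len; i++) {
        uint32_t v = (uint32_t)data[i] * gain;    /* widen to avoid overflow  */
        data[i] = (v > 255u) ? 255u : (uint8_t)v; /* saturate to the U8 range */
    }
}
```

Note that this only avoids a copy inside the DLL call; whether LabVIEW itself makes a copy before the node still depends on how the wire is used upstream (branching, indicators, and so on).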
  5. Hi everybody, In our quest to optimize our code, we are trying to interface variables declared in LabVIEW with C/C++ processing (a DLL). 1 - In my example, we had fun declaring a U32 variable in LabVIEW, then we created a pointer in C to assign it the value we wanted (a copy), then we restored the value to LabVIEW. In this case everything works correctly. Here is the code in C (it is in the attached zip). Hence my question: am I breaking my head unnecessarily; does my function set already exist in the LabVIEW DLL? (I have a feeling one of you will tell me...) 2 - In our second experiment (more interesting), this time we assign the address of the U32 variable declared in LabVIEW to our pointer; the idea is to act directly from C on the variable declared in LabVIEW. We read this address, then we try to manipulate the value of this variable via the pointer in C, and it does not work! Why? Or did I make a mistake in my reasoning? This experiment aims at mastering, at the C level, the memory management of data declared in LabVIEW. The idea would then be to do the same thing with U32 or SGL arrays. 3 - When I declare a variable in LabVIEW, how is it managed in memory? Is it done like in C/C++? 4 - Last question: the MoveBlock function gives me the value of a pointer (read); which function allows me to write to a pointed cell? (See the note and sketch below this post.) I put the source code in the zip file DLL pointeur.zip
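Regarding question 4: MoveBlock simply copies bytes from a source address to a destination address, so it can write through a pointer as well as read from one, depending on which side you pass your pointer on. As a sketch of the write direction done in C instead, here is a hypothetical pair of helpers (illustrative names, DLL_EXPORT assumed defined as in your project):

```c
#include <stdint.h>

/* Hypothetical read/write helpers for a U32 at a raw address received from
   LabVIEW (for example, memory allocated with DSNewPtr). The caller must
   guarantee the address is valid and stays allocated during the call.      */
DLL_EXPORT uint32_t ReadU32(const uint32_t *ptr)
{
    return ptr ? *ptr : 0;          /* read the pointed-to value  */
}

DLL_EXPORT void WriteU32(uint32_t *ptr, uint32_t value)
{
    if (ptr)
        *ptr = value;               /* write through the pointer  */
}
```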
  6. It seems your page is not working Rolf.
  7. Thank you very much!! I will have a look; it will be very useful for us regarding the optimization of our execution code!
  8. Sorry guys if I sound like a newbie, but what is a "CIN"? Another question: I remember seeing, around 2011, that LabVIEW had a C code generator. Do you know why this option is no longer available?
  9. OK, thanks guys for all of this feedback! It is for our HAIBAL project: we will soon start optimising our code and I am exploring different possibilities. We continue Hugo's work, still on our famous stride! And our dream now is to finish on the Xilinx FPGA platform. I want to prove that we can also be efficient at computation with LabVIEW. (Maybe we will have to precompile a large part of our code as DLLs to make it more efficient.)
  10. Hello LabVIEW community, Is there any documentation about the LabVIEW DLL functions? We would like our team to have a global view in order to explore the possibilities of these functions (maybe another topic has already covered this). Thanks for your help
  11. The HAIBAL toolbox will let the user build his own architecture / training / prediction natively in LabVIEW. Of course, we will natively provide numerous examples such as YOLO, MNIST, VGG, ... (which users can use directly and modify). As our toolkit is new, we chose to also be fully compatible with Keras. This means that if you already have a model trained in Keras, it will be possible to import it into HAIBAL. This also opens our library to the thousands of models available on the internet. (Every import is translated into native HAIBAL LabVIEW code that users can edit.) In this case, you will have two choices: 1 - use it in LabVIEW (predict / train); 2 - generate the entire native HAIBAL equivalent of the architecture (as you can see in the video) in order to modify it as you wish. HAIBAL is more than 3000 VIs; it represents a huge amount of work and we are not finished yet. We hope to release the first version this summer (with CUDA) and hope to add NI FPGA optimisation to speed up inference. (OpenCL and compatibility with all Xilinx FPGAs will also come during 2022/2023.) We are currently building our website and our YouTube channel. The team will offer tutorials (YouTube/GitHub) and news (website) to give users visibility. In this video we import a Keras VGG-16 model saved in HDF5 format into the HAIBAL LabVIEW deep-learning library. Then our scripting generates the graph, allowing the user to modify any architecture for his purpose before running it.
  12. Thank you very much for your encouragement. 😀 Yes, we can confirm that it took a lot of work, and your encouragement pushes us to do more! We also thank you for your improvement suggestions (we are interested!! The objective of HAIBAL is to be a user-friendly library). I (Youssef Menjour) have always liked LabVIEW and artificial intelligence, and it was frustrating not to have an efficient tool at our disposal. We will start sharing more and more examples in the next few days. 🧑‍🎓 We will also soon offer a free library to pilot a drone that is easily affordable on Amazon, because HAIBAL will include an example of an AI-assisted autopilot for this drone (and a complete tutorial on YouTube). We are also thinking about doing the same with a "mini cheetah" type robot. In short, things will move in the coming weeks; we still have a lot of work, and once again your encouragement makes us really happy. LabVIEW without AI is a thing of the past. 💪 This example is a template state machine using the HAIBAL library. It shows a signal (here a sinc) and, during its training, the neural network has to learn to predict this signal (here we chose 50 neurons per layer, 10 layers, all dense). This template will be offered as a basic example to understand how to initialize, train and use a neural network model. This kind of "visualisation example" is inspired by https://playground.tensorflow.org/ to help those who want to start learning deep learning.
  13. We started developing HAIBAL 9 months ago with LabVIEW 2020 (the last version with a perpetual licence), so it will be compiled for and offered in LabVIEW 2020. For HDF5 we use Python to import the information we need. We made this choice because if you import HDF5, it means you already use Python (from PyTorch or Keras). This is the only part using Python; everything else is coded in native LabVIEW. 💪
  14. Dear Community, TDF is proud to announce the upcoming release of the HAIBAL library for doing deep learning in LabVIEW. The HAIBAL project is structured in the same way as Keras. The project consists of more than 3000 VIs, all coded in native LabVIEW: 😱😱😱 16 activations (ELU, Exponential, GELU, HardSigmoid, LeakyReLU, Linear, PReLU, ReLU, SELU, Sigmoid, SoftMax, SoftPlus, SoftSign, Swish, TanH, ThresholdedReLU), the nonlinear mathematical functions generally placed after each layer having weights; 84 functional layers (Dense, Conv, MaxPool, RNN, Dropout, etc.); 14 loss functions (BinaryCrossentropy, BinaryCrossentropyWithLogits, Crossentropy, CrossentropyWithLogits, Hinge, Huber, KLDivergence, LogCosH, MeanAbsoluteError, MeanAbsolutePercentage, MeanSquare, MeanSquareLog, Poisson, SquaredHinge), functions evaluating the prediction against the target; 15 initialization functions (Constant, GlorotNormal, GlorotUniform, HeNormal, HeUniform, Identity, LeCunNormal, LeCunUniform, Ones, Orthogonal, RandomNormal, RandomUniform, TruncatedNormal, VarianceScaling, Zeros), functions initializing the weights; 7 optimizers (Adagrad, Adam, Inertia, Nadam, Nesterov, RMSProp, SGD), functions to update the weights. Currently we are working on full Keras compatibility through HDF5 files, and we will soon start the same work for PyTorch (we are able to load models, and will be able to save models in the future; this part is important for us). CUDA already works if you use an Nvidia board, and NI FPGA boards will also be supported (not done yet). We are also working on full integration with the Xilinx Alveo systems for acceleration. The user will be able to build any model he wants; the only limitation will be his hardware (we will offer the same freedom as Keras or PyTorch), and in the future our company could offer hardware (a Linux server with a Xilinx Alveo card, for example --> https://www.xilinx.com/products/boards-and-kits/alveo.html, all fully compatible with HAIBAL!). About the project communication: the website will be completely redone, a YouTube channel will be set up with many tutorials, and a set of well-known examples will be offered within the library (YOLO, MNIST, etc.). For now we have not defined a release date, but we are thinking of next July (it is not official; we are doing our best to finish our product, but as we are a small passionate team of three working on it, we do our best to release it soon). This work is titanic, and believe me, it makes us happy that you encourage us in it (it boosts us). In short, we are doing our best to release this library as soon as possible. Still a little patience... YouTube video: this example is a template state machine using the HAIBAL library. It shows a signal (here a cosine) and, during its training, the neural network has to learn to predict this signal (here we chose 40 neurons per layer, 5 layers, all dense). This template will be offered as a basic example to understand how to initialize, train and use a neural network model. This kind of "visualisation example" is inspired by https://playground.tensorflow.org/ to help those who want to start learning deep learning.